5 Search Results for "Vee, Erik"


Document
Track A: Algorithms, Complexity and Games
Efficient Caching with Reserves via Marking

Authors: Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Online caching is among the most fundamental and well-studied problems in the area of online algorithms. Innovative algorithmic ideas and analysis - including potential functions and primal-dual techniques - give insight into this still-growing area. Here, we introduce a new analysis technique that first uses a potential function to upper bound the cost of an online algorithm and then pairs that with a new dual-fitting strategy to lower bound the cost of an offline optimal algorithm. We apply these techniques to the Caching with Reserves problem recently introduced by Ibrahimpur et al. [Ibrahimpur et al., 2022] and give an O(log k)-competitive fractional online algorithm via a marking strategy, where k denotes the size of the cache. We also design a new online rounding algorithm that runs in polynomial time to obtain an O(log k)-competitive randomized integral algorithm. Additionally, we provide a new, simple proof for randomized marking for the classical unweighted paging problem.
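
The last sentence refers to the classical randomized marking algorithm for unweighted paging, which can be sketched in a few lines. The Python sketch below is only an illustration of that baseline (the function name and interface are ours, not the paper's): every requested page is marked, a miss evicts a uniformly random unmarked page, and a new phase begins once every cached page is marked.

import random

def randomized_marking(requests, k):
    # Classical randomized marking for unweighted paging: every requested
    # page gets marked; on a miss with a full cache, a uniformly random
    # unmarked page is evicted, and a new phase begins (all marks cleared)
    # once every cached page is marked. Returns the number of cache misses.
    cache, marked = set(), set()
    misses = 0
    for page in requests:
        if page not in cache:
            misses += 1
            if len(cache) == k:                    # cache full: must evict
                if not cache - marked:             # everything marked: new phase
                    marked.clear()
                victim = random.choice(list(cache - marked))
                cache.discard(victim)
            cache.add(page)
        marked.add(page)                           # requested pages stay marked
    return misses

For example, randomized_marking([1, 2, 3, 1, 4, 2, 5], k=3) returns the number of misses this strategy incurs on that request sequence.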

Cite as

Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang. Efficient Caching with Reserves via Marking. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 80:1-80:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{ibrahimpur_et_al:LIPIcs.ICALP.2023.80,
  author =	{Ibrahimpur, Sharat and Purohit, Manish and Svitkina, Zoya and Vee, Erik and Wang, Joshua R.},
  title =	{{Efficient Caching with Reserves via Marking}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{80:1--80:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.80},
  URN =		{urn:nbn:de:0030-drops-181328},
  doi =		{10.4230/LIPIcs.ICALP.2023.80},
  annote =	{Keywords: Approximation Algorithms, Online Algorithms, Caching}
}
Document
APPROX
Caching with Reserves

Authors: Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
Caching is among the most well-studied topics in algorithm design, in part because it is such a fundamental component of many computer systems. Much of traditional caching research studies cache management for a single-user or single-processor environment. In this paper, we propose two related generalizations of the classical caching problem that capture issues that arise in a multi-user or multi-processor environment. In the caching with reserves problem, a caching algorithm is required to maintain at least k_i pages belonging to user i in the cache at any time, for some given reserve capacities k_i. In the public-private caching problem, the cache of total size k is partitioned into subcaches, a private cache of size k_i for each user i and a shared public cache usable by any user. In both of these models, as in the classical caching framework, the objective of the algorithm is to dynamically maintain the cache so as to minimize the total number of cache misses. We show that caching with reserves and public-private caching models are equivalent up to constant factors, and thus focus on the former. Unlike classical caching, both of these models turn out to be NP-hard even in the offline setting, where the page sequence is known in advance. For the offline setting, we design a 2-approximation algorithm, whose analysis carefully keeps track of a potential function to bound the cost. In the online setting, we first design an O(ln k)-competitive fractional algorithm using the primal-dual framework, and then show how to convert it online to a randomized integral algorithm with the same guarantee.
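
The reserve constraint itself is easy to state in code. The sketch below is only meant to illustrate the model; it is a reserve-respecting LRU heuristic, not the paper's 2-approximation or its O(ln k)-competitive online algorithm. A page owned by user u may be evicted only while u currently holds strictly more cached pages than its reserve.

from collections import OrderedDict

def reserve_respecting_lru(requests, k, reserves):
    # Illustration of the caching-with-reserves constraint only: the cache
    # holds at most k pages in total, and a page of user u may be evicted
    # only while u holds strictly more than reserves[u] pages. Among the
    # evictable pages we use plain LRU. `requests` is a list of (user, page)
    # pairs; we assume an evictable page always exists when eviction is
    # needed (e.g., the reserves sum to strictly less than k).
    cache = OrderedDict()                  # page -> owning user, in LRU order
    held = {}                              # number of cached pages per user
    misses = 0
    for user, page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh recency
            continue
        misses += 1
        if len(cache) == k:
            victim = next(p for p, u in cache.items()
                          if held[u] > reserves.get(u, 0))
            held[cache[victim]] -= 1
            del cache[victim]
        cache[page] = user
        held[user] = held.get(user, 0) + 1
    return misses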

Cite as

Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang. Caching with Reserves. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 52:1-52:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ibrahimpur_et_al:LIPIcs.APPROX/RANDOM.2022.52,
  author =	{Ibrahimpur, Sharat and Purohit, Manish and Svitkina, Zoya and Vee, Erik and Wang, Joshua R.},
  title =	{{Caching with Reserves}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{52:1--52:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.52},
  URN =		{urn:nbn:de:0030-drops-171741},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.52},
  annote =	{Keywords: Approximation Algorithms, Online Algorithms, Caching}
}
Document
Online Metric Allocation and Time-Varying Regularization

Authors: Nikhil Bansal and Christian Coester

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
We introduce a general online allocation problem that connects several of the most fundamental problems in online optimization. Let M be an n-point metric space. Consider a resource that can be allocated in arbitrary fractions to the points of M. At each time t, a convex monotone cost function c_t: [0,1] → ℝ_+ appears at some point r_t ∈ M. In response, an algorithm may change the allocation of the resource, paying movement cost as determined by the metric and service cost c_t(x_{r_t}), where x_{r_t} is the fraction of the resource at r_t at the end of time t. For example, when the cost functions are c_t(x) = α x, this is equivalent to randomized MTS, and when the cost functions are c_t(x) = ∞⋅1_{x < 1/k}, this is equivalent to fractional k-server. Because of an inherent scale-freeness property of the problem, existing techniques for MTS and k-server fail to achieve similar guarantees for metric allocation. To handle this, we consider a generalization of the online multiplicative update method where we decouple the rate at which a variable is updated from its value, resulting in interesting new dynamics. We use this to give an O(log n)-competitive algorithm for weighted star metrics. We then show how this corresponds to an extension of the online mirror descent framework to a setting where the regularizer is time-varying. Using this perspective, we further refine the guarantees of our algorithm. We also consider the case of non-convex cost functions. Using a simple ℓ₂²-regularizer, we give tight bounds of Θ(n) on tree metrics, which imply deterministic and randomized competitive ratios of O(n²) and O(n log n) respectively on arbitrary metrics.
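
The cost model in this abstract can be made concrete for a weighted star metric with a short accounting routine. In the sketch below, the per-unit movement charge weights[i] at leaf i and the interface are assumptions made only for illustration; the service term c_t(x_{r_t}) is taken directly from the abstract.

def total_cost(weights, allocations, events):
    # Cost accounting for the metric-allocation model on a weighted star,
    # following the definitions in the abstract. Moving mass into or out of
    # leaf i is charged weights[i] per unit (one common normalization for
    # star metrics; the exact constant is an assumption of this sketch), and
    # each step additionally pays the service cost c_t evaluated at the
    # allocation on the requested leaf. allocations[0] is the initial
    # allocation and allocations[t] the allocation after serving
    # events[t-1] = (r_t, c_t), where c_t is a convex nondecreasing
    # function on [0, 1].
    n = len(weights)
    cost = 0.0
    x_prev = allocations[0]
    for (r, c), x in zip(events, allocations[1:]):
        cost += sum(weights[i] * abs(x[i] - x_prev[i]) for i in range(n))  # movement
        cost += c(x[r])                                                    # service
        x_prev = x
    return cost

Plugging in c(x) = α·x or the threshold functions mentioned above recovers the MTS-style and fractional k-server-style service costs, respectively.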

Cite as

Nikhil Bansal and Christian Coester. Online Metric Allocation and Time-Varying Regularization. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bansal_et_al:LIPIcs.ESA.2022.13,
  author =	{Bansal, Nikhil and Coester, Christian},
  title =	{{Online Metric Allocation and Time-Varying Regularization}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.13},
  URN =		{urn:nbn:de:0030-drops-169515},
  doi =		{10.4230/LIPIcs.ESA.2022.13},
  annote =	{Keywords: Online algorithms, competitive analysis, k-server, metrical task systems, mirror descent, regularization}
}
Document
Scheduling with Communication Delay in Near-Linear Time

Authors: Quanquan C. Liu, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
We consider the problem of efficiently scheduling jobs with precedence constraints on a set of identical machines in the presence of a uniform communication delay. Such precedence-constrained jobs can be modeled as a directed acyclic graph, G = (V, E). In this setting, if two precedence-constrained jobs u and v, with v dependent on u (u ≺ v), are scheduled on different machines, then v must start at least ρ time units after u completes. The scheduling objective is to minimize makespan, i.e. the total time from when the first job starts to when the last job finishes. The focus of this paper is to provide an efficient approximation algorithm with near-linear running time. We build on the algorithm of Lepere and Rapine [STACS 2002] for this problem to give an O((ln ρ)/(ln ln ρ))-approximation algorithm that runs in Õ(|V|+|E|) time.
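
The feasibility conditions of this model are easy to state directly. The checker below assumes unit-length jobs and no duplication, both simplifications made only for this sketch, and returns the makespan of a given schedule.

def check_schedule(edges, schedule, rho):
    # Feasibility check and makespan for the model in the abstract, under two
    # simplifying assumptions made only for this sketch: unit-length jobs and
    # no duplication. `edges` lists precedence pairs (u, v) with u preceding
    # v, schedule[j] = (machine, start_time), and rho is the uniform
    # communication delay. Raises ValueError if the schedule is infeasible.
    slots = set()
    for machine, start in schedule.values():
        if (machine, start) in slots:              # one unit-length job per slot
            raise ValueError(f"machine {machine} double-booked at time {start}")
        slots.add((machine, start))
    for u, v in edges:
        (mu, su), (mv, sv) = schedule[u], schedule[v]
        delay = 1 if mu == mv else 1 + rho         # cross-machine edges wait rho extra
        if sv < su + delay:
            raise ValueError(f"precedence {u} -> {v} violated")
    return max(start + 1 for _, start in schedule.values())   # makespan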

Cite as

Quanquan C. Liu, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang. Scheduling with Communication Delay in Near-Linear Time. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 47:1-47:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.STACS.2022.47,
  author =	{Liu, Quanquan C. and Purohit, Manish and Svitkina, Zoya and Vee, Erik and Wang, Joshua R.},
  title =	{{Scheduling with Communication Delay in Near-Linear Time}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{47:1--47:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.47},
  URN =		{urn:nbn:de:0030-drops-158570},
  doi =		{10.4230/LIPIcs.STACS.2022.47},
  annote =	{Keywords: near-linear time scheduling, scheduling with duplication, precedence-constrained jobs, graph algorithms}
}
Document
Semi-Online Bipartite Matching

Authors: Ravi Kumar, Manish Purohit, Aaron Schild, Zoya Svitkina, and Erik Vee

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In this paper we introduce the semi-online model that generalizes the classical online computational model. The semi-online model postulates that the unknown future has a predictable part and an adversarial part; these parts can be arbitrarily interleaved. An algorithm in this model operates as in the standard online model, i.e., makes an irrevocable decision at each step. We consider bipartite matching in the semi-online model. Our main contributions are competitive algorithms for this problem and a near-matching hardness bound. The competitive ratio of the algorithms nicely interpolates between the truly offline setting (i.e., no adversarial part) and the truly online setting (i.e., no predictable part).
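
A toy version of the semi-online setting can be written down directly. The sketch below is not the paper's algorithm, only an illustration of how a predicted part and an adversarial part interleave: when an arriving vertex carries a usable prediction we follow it, otherwise we match greedily, and every decision is irrevocable.

def semi_online_greedy(arrivals, predicted_match):
    # Toy illustration of the semi-online model, not the paper's algorithm:
    # online vertices arrive one at a time with their offline neighborhoods
    # and an optional prediction. When the predicted partner is a free
    # neighbor we follow it; otherwise we fall back to greedy matching.
    # Every decision is irrevocable, as in the standard online model.
    # `arrivals` is a list of (vertex, neighbors) pairs and predicted_match
    # maps a predicted online vertex to its suggested offline partner
    # (all names here are illustrative).
    matched_offline = set()
    matching = {}
    for v, neighbors in arrivals:
        hint = predicted_match.get(v)
        if hint is not None and hint in neighbors and hint not in matched_offline:
            choice = hint                            # predictable part: follow the hint
        else:
            free = [u for u in neighbors if u not in matched_offline]
            choice = free[0] if free else None       # adversarial part: match greedily
        if choice is not None:
            matching[v] = choice
            matched_offline.add(choice)
    return matching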

Cite as

Ravi Kumar, Manish Purohit, Aaron Schild, Zoya Svitkina, and Erik Vee. Semi-Online Bipartite Matching. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.ITCS.2019.50,
  author =	{Kumar, Ravi and Purohit, Manish and Schild, Aaron and Svitkina, Zoya and Vee, Erik},
  title =	{{Semi-Online Bipartite Matching}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.50},
  URN =		{urn:nbn:de:0030-drops-101436},
  doi =		{10.4230/LIPIcs.ITCS.2019.50},
  annote =	{Keywords: Semi-Online Algorithms, Bipartite Matching}
}
