245 Search Results for "Guruswami, Venkatesan"


Volume

LIPIcs, Volume 287

15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

ITCS 2024, January 30 to February 2, 2024, Berkeley, CA, USA

Editors: Venkatesan Guruswami

Volume

LIPIcs, Volume 250

42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)

FSTTCS 2022, December 18-20, 2022, IIT Madras, Chennai, India

Editors: Anuj Dawar and Venkatesan Guruswami

Document
Worst-Case to Expander-Case Reductions: Derandomized and Generalized

Authors: Amir Abboud and Nathan Wallheimer

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A recent paper by Abboud and Wallheimer [ITCS 2023] presents self-reductions for various fundamental graph problems, which transform worst-case instances to expanders, thus proving that the complexity remains unchanged if the input is assumed to be an expander. An interesting corollary of their self-reductions is that if some problem admits such a reduction, then the popular algorithmic paradigm based on expander-decompositions is useless against it. In this paper, we improve their core gadget, which augments a graph to make it an expander while retaining its important structure. Our new core construction has the benefit of being simple to analyze and generalize while obtaining the following results:
- A derandomization of the self-reductions, showing that the equivalence between worst-case and expander-case holds even for deterministic algorithms, and ruling out the use of expander-decompositions as a derandomization tool.
- An extension of the results to other models of computation, such as the Fully Dynamic model and the Congested Clique model. In the former, we either improve or provide an alternative approach to some recent hardness results for dynamic expander graphs by Henzinger, Paz, and Sricharan [ESA 2022].
In addition, we continue this line of research by designing new self-reductions for more problems, such as Max-Cut and dynamic Densest Subgraph, and demonstrating that the core gadget can be utilized to lift lower bounds based on the OMv Conjecture to expanders.
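
The central notion above is that of an expander. As a rough illustration only (not the paper's gadget or algorithms), here is a minimal brute-force sketch of edge conductance for a small undirected graph given as an adjacency list; an expander is a graph whose conductance is bounded away from zero, which is exactly the structure that expander-decomposition-based algorithms exploit.

from itertools import combinations

def conductance(adj):
    """Brute-force edge conductance of a small undirected graph.

    adj: dict mapping vertex -> set of neighbours.
    Returns the minimum over non-trivial cuts S of cut(S, V\\S) / min(vol(S), vol(V\\S)).
    Exponential in |V|; meant only to illustrate the definition."""
    vertices = list(adj)
    vol = {v: len(adj[v]) for v in vertices}
    best = float("inf")
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            S = set(S)
            cut = sum(1 for v in S for u in adj[v] if u not in S)
            vol_S = sum(vol[v] for v in S)
            vol_rest = sum(vol[v] for v in vertices if v not in S)
            denom = min(vol_S, vol_rest)
            if denom > 0:
                best = min(best, cut / denom)
    return best

# A 4-cycle: the sparsest cut splits it into two paths, giving conductance 2/4 = 0.5.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(conductance(cycle))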

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions: Derandomized and Generalized. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{abboud_et_al:LIPIcs.ESA.2024.4,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions: Derandomized and Generalized}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.4},
  URN =		{urn:nbn:de:0030-drops-210751},
  doi =		{10.4230/LIPIcs.ESA.2024.4},
  annote =	{Keywords: Fine-grained complexity, expander graphs, self-reductions, worst-case to expander-case, expander decomposition, dynamic algorithms, exact and parameterized complexity, max-cut, maximum matching, k-clique detection, densest subgraph}
}
Document
Outlier Robust Multivariate Polynomial Regression

Authors: Vipul Arora, Arnab Bhattacharyya, Mathews Boban, Venkatesan Guruswami, and Esty Kelman

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the problem of robust multivariate polynomial regression: let p: ℝⁿ → ℝ be an unknown n-variate polynomial of degree at most d in each variable. We are given as input a set of random samples (𝐱_i,y_i) ∈ [-1,1]ⁿ × ℝ that are noisy versions of (𝐱_i,p(𝐱_i)). More precisely, each 𝐱_i is sampled independently from some distribution χ on [-1,1]ⁿ, and for each i independently, y_i is arbitrary (i.e., an outlier) with probability at most ρ < 1/2, and otherwise satisfies |y_i-p(𝐱_i)| ≤ σ. The goal is to output a polynomial p̂, of degree at most d in each variable, within an 𝓁_∞-distance of at most O(σ) from p. Kane, Karmalkar, and Price [FOCS'17] solved this problem for n = 1. We generalize their results to the n-variate setting, showing an algorithm that achieves a sample complexity of O_n(dⁿ log d), where the hidden constant depends on n, if χ is the n-dimensional Chebyshev distribution. The sample complexity is O_n(d^{2n} log d) if the samples are drawn from the uniform distribution instead. The approximation error is guaranteed to be at most O(σ), and the run-time depends on log(1/σ). In the setting where each 𝐱_i and y_i is known up to N bits of precision, the run-time’s dependence on N is linear. We also show that our sample complexities are optimal in terms of dⁿ. Furthermore, we show that it is possible to have the run-time be independent of 1/σ, at the cost of a higher sample complexity.
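
A minimal sketch of the sampling model described above, assuming the univariate case (n = 1) for readability; the polynomial p, the outlier probability rho, and the noise level sigma below are placeholder choices, not values from the paper.

import random

def noisy_samples(p, num_samples, rho=0.1, sigma=0.05):
    """Draw samples (x, y) following the outlier-robust regression model:
    x ~ Uniform[-1, 1]; with probability rho, y is arbitrary (an outlier),
    otherwise |y - p(x)| <= sigma."""
    samples = []
    for _ in range(num_samples):
        x = random.uniform(-1.0, 1.0)
        if random.random() < rho:
            y = random.uniform(-10.0, 10.0)           # arbitrary (outlier) label
        else:
            y = p(x) + random.uniform(-sigma, sigma)  # bounded noise
        samples.append((x, y))
    return samples

# Example: noisy samples of the degree-3 polynomial p(x) = 2x^3 - x.
data = noisy_samples(lambda x: 2 * x**3 - x, num_samples=200)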

Cite as

Vipul Arora, Arnab Bhattacharyya, Mathews Boban, Venkatesan Guruswami, and Esty Kelman. Outlier Robust Multivariate Polynomial Regression. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{arora_et_al:LIPIcs.ESA.2024.12,
  author =	{Arora, Vipul and Bhattacharyya, Arnab and Boban, Mathews and Guruswami, Venkatesan and Kelman, Esty},
  title =	{{Outlier Robust Multivariate Polynomial Regression}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.12},
  URN =		{urn:nbn:de:0030-drops-210830},
  doi =		{10.4230/LIPIcs.ESA.2024.12},
  annote =	{Keywords: Robust Statistics, Polynomial Regression, Sample Efficient Learning}
}
Document
Longest Common Substring with Gaps and Related Problems

Authors: Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The longest common substring (also known as longest common factor) and longest common subsequence problems are two well-studied classical string problems. The former is solvable in optimal 𝒪(n) time for two strings of lengths m and n with m ≤ n, and the latter is solvable in 𝒪(nm) time, which is conditionally optimal under the Strong Exponential Time Hypothesis. In this work, we study the problem of longest common factor with gaps, that is, finding a set of at most k matching substrings obeying precedence conditions with maximum total length. For k = 1, this is equivalent to the longest common factor problem, and for k = m, this is equivalent to the longest common subsequence problem. Our work demonstrates that, for constant k, this problem can be solved in strongly subquadratic time, i.e., nm^{1 - Θ(1)}. Motivated by co-linear chaining applications in Computational Biology, we further demonstrate that the longest common factor with gaps results can be extended to the case where the matches are restricted to maximal exact matches (MEMs). To further demonstrate the applicability of our techniques, we show that a similar approach can be used for a restricted version of the episode matching problem where one seeks an ordered set of at most k matches whose concatenation equals a query pattern P and the length of the substring of T containing the matches is minimized. These solutions all run in strongly subquadratic time for constant k.
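
For the k = 1 base case mentioned above (the classical longest common factor/substring), the textbook quadratic dynamic program is sketched below purely as a reference point; the optimal solution is linear-time via suffix trees, and the paper's subquadratic techniques for general k are far more involved.

def longest_common_substring(s, t):
    """Textbook O(|s|*|t|) DP for the k = 1 case (longest common factor).
    dp[i][j] = length of the longest common suffix of s[:i] and t[:j]."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    best_len, best_end = 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return s[best_end - best_len:best_end]

print(longest_common_substring("banana", "ananas"))  # "anana"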

Cite as

Aranya Banerjee, Daniel Gibney, and Sharma V. Thankachan. Longest Common Substring with Gaps and Related Problems. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{banerjee_et_al:LIPIcs.ESA.2024.16,
  author =	{Banerjee, Aranya and Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{Longest Common Substring with Gaps and Related Problems}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.16},
  URN =		{urn:nbn:de:0030-drops-210877},
  doi =		{10.4230/LIPIcs.ESA.2024.16},
  annote =	{Keywords: Pattern Matching, Longest Common Subsequence, Episode Matching}
}
Document
On Finding Longest Palindromic Subsequences Using Longest Common Subsequences

Authors: Gerth Stølting Brodal, Rolf Fagerberg, and Casper Moldrup Rysgaard

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Two standard textbook problems illustrating dynamic programming are to find the longest common subsequence (LCS) between two strings and to find the longest palindromic subsequence (LPS) of a single string. A popular claim is that the longest palindromic subsequence in a string can be computed as the longest common subsequence between the string and the reversed string. We prove that the correctness of this claim depends on how the longest common subsequence is computed. In particular, we prove that the classical dynamic programming solution by Wagner and Fischer [JACM 1974] for finding an LCS in fact does find an LPS, while a slightly different LCS backtracking strategy makes the algorithm fail to always report a palindrome.
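
A minimal sketch of the construction being analysed, using a classical Wagner–Fischer-style LCS table with one particular backtracking convention. The paper's point is precisely that whether the backtracked string is a palindrome depends on the backtracking rule, so this illustrates the recipe rather than proving its correctness.

def lcs(a, b):
    """Classical dynamic-programming LCS with one standard backtracking rule."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack; ties are broken by moving up (one possible convention).
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

s = "character"
candidate = lcs(s, s[::-1])               # the "LPS as LCS with the reverse" recipe
print(candidate, candidate == candidate[::-1])  # has LPS length; may or may not be a palindrome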

Cite as

Gerth Stølting Brodal, Rolf Fagerberg, and Casper Moldrup Rysgaard. On Finding Longest Palindromic Subsequences Using Longest Common Subsequences. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{brodal_et_al:LIPIcs.ESA.2024.35,
  author =	{Brodal, Gerth St{\o}lting and Fagerberg, Rolf and Rysgaard, Casper Moldrup},
  title =	{{On Finding Longest Palindromic Subsequences Using Longest Common Subsequences}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.35},
  URN =		{urn:nbn:de:0030-drops-211068},
  doi =		{10.4230/LIPIcs.ESA.2024.35},
  annote =	{Keywords: Palindromic subsequence, longest common subsequence, dynamic programming}
}
Document
Invertible Bloom Lookup Tables with Less Memory and Randomness

Authors: Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing n elements and ensuring correctness with probability at least 1 - δ, existing IBLT constructions require Ω(n(log(1/δ)/(log n) + 1)) space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing n elements with a failure probability of at most δ, our data structure only requires O(n + log(1/δ) log log(1/δ)) space and O(log(log(n)/δ))-wise independent hash functions. As a key technical ingredient we show that hashing n keys with any k-wise independent hash function h: U → [Cn] for some sufficiently large constant C guarantees with probability 1 - 2^{-Ω(k)} that at least n/2 keys will have a unique hash value. Proving this is non-trivial as k approaches n. We believe that the techniques used to prove this statement may be of independent interest. We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes and using our new IBLT we achieve an asymptotically better compression rate.
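
A small simulation of the key technical claim above. For brevity it uses fully random bucket choices rather than a k-wise independent family (an assumption made purely for illustration): hashing n keys into Cn buckets typically leaves well over half of the keys alone in their bucket.

import random
from collections import Counter

def unique_fraction(n, C=4, trials=20):
    """Hash n keys into C*n buckets with a fully random hash and report the
    fraction of keys whose bucket contains no other key, averaged over trials.
    (The paper's statement needs only k-wise independence.)"""
    total = 0.0
    for _ in range(trials):
        buckets = [random.randrange(C * n) for _ in range(n)]
        counts = Counter(buckets)
        unique = sum(1 for b in buckets if counts[b] == 1)
        total += unique / n
    return total / trials

print(unique_fraction(10_000))  # typically well above 0.5 for C = 4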

Cite as

Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Invertible Bloom Lookup Tables with Less Memory and Randomness. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{fleischhacker_et_al:LIPIcs.ESA.2024.54,
  author =	{Fleischhacker, Nils and Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Invertible Bloom Lookup Tables with Less Memory and Randomness}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.54},
  URN =		{urn:nbn:de:0030-drops-211252},
  doi =		{10.4230/LIPIcs.ESA.2024.54},
  annote =	{Keywords: Invertible Bloom Lookup Tables}
}
Document
New Algorithms and Lower Bounds for Streaming Tournaments

Authors: Prantar Ghosh and Sahil Kuchlous

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study fundamental directed graph (digraph) problems in the streaming model. An initial investigation by Chakrabarti, Ghosh, McGregor, and Vorotnikova [SODA'20] on streaming digraphs showed that while most of these problems are provably hard in general, some of them become tractable when restricted to the well-studied class of tournament graphs where every pair of nodes shares exactly one directed edge. Thus, we focus on tournaments and improve the state of the art for multiple problems in terms of both upper and lower bounds. Our primary upper bound is a deterministic single-pass semi-streaming algorithm (using Õ(n) space for n-node graphs, where Õ(.) hides polylog(n) factors) for decomposing a tournament into strongly connected components (SCC). It improves upon the previously best-known algorithm by Baweja, Jia, and Woodruff [ITCS'22] in terms of both space and passes: for p ⩾ 1, they used (p+1) passes and Õ(n^{1+1/p}) space. We further extend our algorithm to digraphs that are close to tournaments and establish tight bounds demonstrating that the problem’s complexity grows smoothly with the "distance" from tournaments. Applying our SCC-decomposition framework, we obtain improved - and in some cases, optimal - tournament algorithms for s,t-reachability, strong connectivity, Hamiltonian paths and cycles, and feedback arc set. On the other hand, we prove lower bounds exhibiting that some well-studied problems - such as (exact) feedback arc set and s,t-distance - remain hard (require Ω(n²) space) on tournaments. Moreover, we generalize the former problem’s lower bound to establish space-approximation tradeoffs: any single-pass (1 ± ε)-approximation algorithm requires Ω(n/√ε) space. Finally, we settle the streaming complexities of two basic digraph problems studied by prior work: acyclicity testing of tournaments and sink finding in DAGs. As a whole, our collection of results contributes significantly to the growing literature on streaming digraphs.
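
For intuition about why tournaments are so much more structured than general digraphs, here is a standard offline (non-streaming) construction, included only as background and not the paper's algorithm: every tournament has a Hamiltonian path, and one can be built by simple insertion. The adjacency encoding below is a placeholder.

def hamiltonian_path(beats):
    """Return a Hamiltonian path of a tournament by insertion.

    beats[u][v] is True iff the edge is directed u -> v. Each new vertex is
    inserted right before the first vertex on the path that it beats
    (or appended at the end if it beats none of them)."""
    path = []
    for v in beats:
        pos = len(path)
        for i, u in enumerate(path):
            if beats[v][u]:        # v -> u, so v can go right before u
                pos = i
                break
        path.insert(pos, v)
    return path

# A 4-vertex tournament: 0->1, 1->2, 2->0, and 3 beats everyone.
beats = {
    0: {1: True, 2: False, 3: False},
    1: {0: False, 2: True, 3: False},
    2: {0: True, 1: False, 3: False},
    3: {0: True, 1: True, 2: True},
}
print(hamiltonian_path(beats))  # e.g. [3, 2, 0, 1]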

Cite as

Prantar Ghosh and Sahil Kuchlous. New Algorithms and Lower Bounds for Streaming Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ghosh_et_al:LIPIcs.ESA.2024.60,
  author =	{Ghosh, Prantar and Kuchlous, Sahil},
  title =	{{New Algorithms and Lower Bounds for Streaming Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.60},
  URN =		{urn:nbn:de:0030-drops-211318},
  doi =		{10.4230/LIPIcs.ESA.2024.60},
  annote =	{Keywords: tournaments, streaming algorithms, graph algorithms, communication complexity, strongly connected components, reachability, feedback arc set}
}
Document
Optimizing Throughput and Makespan of Queuing Systems by Information Design

Authors: Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study the optimal provision of information for two natural performance measures of queuing systems: throughput and makespan. A set of parallel links (queues) is equipped with deterministic capacities and stochastic offsets where the latter depend on a realized state, and the number of states is assumed to be constant. A continuum of flow particles (agents) arrives at the system at a constant rate. A system operator knows the realization of the state and may (partially) reveal this information via a public signaling scheme to the flow particles. Upon arrival, the flow particles observe the signal issued by the system operator, form an updated belief about the realized state, and decide on which link they use. Inflow into a link exceeding the link’s capacity builds up in a queue that increases the cost (total travel time) on the link. Dynamic inflow rates are in a Bayesian dynamic equilibrium when the expected cost along all links with positive inflow is equal at every point in time and not larger than the expected cost of any unused link. For a given time horizon T, the throughput induced by a signaling scheme is the total volume of flow that leaves the links in the interval [0,T]. The public signaling scheme maximizing the throughput may involve irrational numbers. We provide an additive polynomial time approximation scheme (PTAS) that approximates the optimal throughput to within an arbitrary additive constant ε > 0. The algorithm solves a Lagrangian dual of the signaling problem with the Ellipsoid method whose separation oracle is implemented by a cell decomposition technique. We also provide a multiplicative fully polynomial time approximation scheme (FPTAS) that does not rely on strong duality and, thus, allows us to compute the optimal signals. It uses a different cell decomposition technique together with a piecewise convex under-estimator of the optimal value function. Finally, we consider the makespan of a Bayesian dynamic equilibrium, which is defined as the last point in time when a given total value of flow leaves the system. Using a variational inequality argument, we show that full information revelation is a public signaling scheme that minimizes the makespan.

Cite as

Svenja M. Griesbach, Max Klimm, Philipp Warode, and Theresa Ziemke. Optimizing Throughput and Makespan of Queuing Systems by Information Design. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{griesbach_et_al:LIPIcs.ESA.2024.62,
  author =	{Griesbach, Svenja M. and Klimm, Max and Warode, Philipp and Ziemke, Theresa},
  title =	{{Optimizing Throughput and Makespan of Queuing Systems by Information Design}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.62},
  URN =		{urn:nbn:de:0030-drops-211336},
  doi =		{10.4230/LIPIcs.ESA.2024.62},
  annote =	{Keywords: Information Design, Dynamic Flows, Public Signals, Convex Envelope}
}
Document
Parameterized Complexity of MinCSP over the Point Algebra

Authors: George Osipov, Marcin Pilipczuk, and Magnus Wahlström

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The input in the Minimum-Cost Constraint Satisfaction Problem (MinCSP) over the Point Algebra contains a set of variables, a collection of constraints of the form x < y, x = y, x ≤ y and x ≠ y, and a budget k. The goal is to check whether it is possible to assign rational values to the variables while breaking constraints of total cost at most k. This problem generalizes several prominent graph separation and transversal problems:
- MinCSP({<}) is equivalent to Directed Feedback Arc Set,
- MinCSP({<, ≤}) is equivalent to Directed Subset Feedback Arc Set,
- MinCSP({=, ≠}) is equivalent to Edge Multicut, and
- MinCSP({≤, ≠}) is equivalent to Directed Symmetric Multicut.
Apart from trivial cases, MinCSP(Γ) for Γ ⊆ {<, =, ≤, ≠} is NP-hard even to approximate within any constant factor under the Unique Games Conjecture. Hence, we study the parameterized complexity of this problem under a natural parameterization by the solution cost k. We obtain a complete classification: if Γ ⊆ {<, =, ≤, ≠} contains both ≤ and ≠, then MinCSP(Γ) is W[1]-hard; otherwise it is fixed-parameter tractable. For the positive cases, we solve MinCSP({<, =, ≠}), generalizing the FPT results for Directed Feedback Arc Set and Edge Multicut as well as their weighted versions. Our algorithm works by reducing the problem to a Boolean MinCSP, which is in turn solved by flow augmentation. For the lower bounds, we prove that Directed Symmetric Multicut is W[1]-hard, solving an open problem.
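
To make the problem statement concrete, a brute-force sketch (usable only on tiny instances and unrelated to the paper's FPT techniques): given point-algebra constraints with costs and an assignment, count the total cost of broken constraints; MinCSP asks whether some assignment keeps this cost at most k. The variable names and the example are placeholders.

from itertools import product

OPS = {
    "<":  lambda a, b: a < b,
    "=":  lambda a, b: a == b,
    "<=": lambda a, b: a <= b,
    "!=": lambda a, b: a != b,
}

def violation_cost(assignment, constraints):
    """constraints: list of (x, op, y, cost). Total cost of broken constraints."""
    return sum(cost for x, op, y, cost in constraints
               if not OPS[op](assignment[x], assignment[y]))

def min_cost(variables, constraints):
    """Exhaustive search: for the point algebra only the relative order of the
    assigned values matters, so values in {0, ..., |V|-1} suffice."""
    best = float("inf")
    vals = range(len(variables))
    for combo in product(vals, repeat=len(variables)):
        assignment = dict(zip(variables, combo))
        best = min(best, violation_cost(assignment, constraints))
    return best

# MinCSP({<}) on a directed triangle corresponds to Directed Feedback Arc Set: cost 1.
constraints = [("a", "<", "b", 1), ("b", "<", "c", 1), ("c", "<", "a", 1)]
print(min_cost(["a", "b", "c"], constraints))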

Cite as

George Osipov, Marcin Pilipczuk, and Magnus Wahlström. Parameterized Complexity of MinCSP over the Point Algebra. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 93:1-93:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{osipov_et_al:LIPIcs.ESA.2024.93,
  author =	{Osipov, George and Pilipczuk, Marcin and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Complexity of MinCSP over the Point Algebra}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{93:1--93:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.93},
  URN =		{urn:nbn:de:0030-drops-211640},
  doi =		{10.4230/LIPIcs.ESA.2024.93},
  annote =	{Keywords: parameterized complexity, constraint satisfaction, point algebra, multicut, feedback arc set}
}
Document
Transaction Fee Mechanism Design in a Post-MEV World

Authors: Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase "maximal extractable value (MEV)." We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: With active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as order flow auctions, block producer competition, trusted hardware, or cryptographic techniques. We then consider a more fine-grained model of block production that more accurately reflects current practice, in which we distinguish the roles of "searchers" (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and "proposers" (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an "MEV oracle" for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a TFM that is inspired by how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the TFM design space with searchers more generally, and design a mechanism that circumvents our impossibility results for TFMs without searchers. Our mechanism (the "SAKA" mechanism) is incentive-compatible (for users, searchers, and the block producer), sybil-proof, and guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transaction sizes are small, no DSIC and sybil-proof deterministic TFM can guarantee more than 50% of the maximum-possible welfare.

Cite as

Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden. Transaction Fee Mechanism Design in a Post-MEV World. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 29:1-29:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bahrani_et_al:LIPIcs.AFT.2024.29,
  author =	{Bahrani, Maryam and Garimidi, Pranav and Roughgarden, Tim},
  title =	{{Transaction Fee Mechanism Design in a Post-MEV World}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{29:1--29:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.29},
  URN =		{urn:nbn:de:0030-drops-209658},
  doi =		{10.4230/LIPIcs.AFT.2024.29},
  annote =	{Keywords: MEV, Transaction Fee Mechanisms, Auctions}
}
Document
APPROX
A Logarithmic Approximation of Linearly-Ordered Colourings

Authors: Johan Håstad, Björn Martinsson, Tamio-Vesa Nakajima, and Stanislav Živný

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
A linearly ordered (LO) k-colouring of a hypergraph assigns to each vertex a colour from the set {0,1,…,k-1} in such a way that each hyperedge has a unique maximum element. Barto, Batistelli, and Berg conjectured that it is NP-hard to find an LO k-colouring of an LO 2-colourable 3-uniform hypergraph for any constant k ≥ 2 [STACS'21], but even the case k = 3 is still open. Nakajima and Živný gave polynomial-time algorithms for finding, given an LO 2-colourable 3-uniform hypergraph, an LO colouring with O^*(√n) colours [ICALP'22] and an LO colouring with O^*(n^(1/3)) colours [ACM ToCT'23]. Very recently, Louis, Newman, and Ray gave an SDP-based algorithm with O^*(n^(1/5)) colours. We present two simple polynomial-time algorithms that find an LO colouring with O(log₂(n)) colours, which is an exponential improvement.
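
A minimal checker for the definition above (a hypothetical helper, not part of the paper): a colouring of a hypergraph is linearly ordered if in every hyperedge the maximum colour is attained by exactly one vertex.

def is_lo_colouring(hyperedges, colour):
    """hyperedges: iterable of vertex tuples; colour: dict vertex -> int.
    Valid iff every hyperedge has a unique maximum-colour vertex."""
    for edge in hyperedges:
        cols = [colour[v] for v in edge]
        if cols.count(max(cols)) != 1:
            return False
    return True

edges = [("u", "v", "w"), ("v", "w", "x")]
print(is_lo_colouring(edges, {"u": 0, "v": 0, "w": 1, "x": 0}))  # True
print(is_lo_colouring(edges, {"u": 0, "v": 1, "w": 1, "x": 0}))  # False: first edge has two maxima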

Cite as

Johan Håstad, Björn Martinsson, Tamio-Vesa Nakajima, and Stanislav Živný. A Logarithmic Approximation of Linearly-Ordered Colourings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 7:1-7:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{hastad_et_al:LIPIcs.APPROX/RANDOM.2024.7,
  author =	{H\r{a}stad, Johan and Martinsson, Bj\"{o}rn and Nakajima, Tamio-Vesa and \v{Z}ivn\'{y}, Stanislav},
  title =	{{A Logarithmic Approximation of Linearly-Ordered Colourings}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{7:1--7:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.7},
  URN =		{urn:nbn:de:0030-drops-210006},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.7},
  annote =	{Keywords: Linear ordered colouring, Hypergraph, Approximation, Promise Constraint Satisfaction Problems}
}
Document
APPROX
Weighted Matching in the Random-Order Streaming and Robust Communication Models

Authors: Diba Hashemi and Weronika Wrzos-Kaminska

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the maximum weight matching problem in the random-order semi-streaming model and in the robust communication model. Unlike many other sublinear models, in these two frameworks, there is a large gap between the guarantees of the best known algorithms for the unweighted and weighted versions of the problem. In the random-order semi-streaming setting, the edges of an n-vertex graph arrive in a stream in a random order. The goal is to compute an approximate maximum weight matching with a single pass over the stream using O(n polylog n) space. Our main result is a (2/3-ε)-approximation algorithm for maximum weight matching in random-order streams, using space O(n log n log R), where R is the ratio between the heaviest and the lightest edge in the graph. Our result nearly matches the best known unweighted (2/3+ε₀)-approximation (where ε₀ ∼ 10^{-14} is a small constant) achieved by Assadi and Behnezhad [Assadi and Behnezhad, 2021], and significantly improves upon previous weighted results. Our techniques also extend to the related robust communication model, in which the edges of a graph are partitioned randomly between Alice and Bob. Alice sends a single message of size O(n polylog n) to Bob, who must compute an approximate maximum weight matching. We achieve a (5/6-ε)-approximation using O(n log n log R) words of communication, matching the results of Azarmehr and Behnezhad [Azarmehr and Behnezhad, 2023] for unweighted graphs.
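
As a point of reference for the semi-streaming model only (not the paper's algorithm): a single pass with O(n) space already yields a maximal matching greedily, which is a 1/2-approximation for unweighted maximum matching; the difficulty addressed above is doing substantially better, with edge weights, when the arrival order is random. The example stream is a placeholder.

def greedy_streaming_matching(edge_stream):
    """One-pass greedy maximal matching: keep an edge iff both endpoints are free.
    Uses O(n) space; any maximal matching is a 1/2-approximation of a maximum
    (unweighted) matching."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

stream = [(1, 2), (2, 3), (3, 4), (4, 1)]   # a 4-cycle arriving edge by edge
print(greedy_streaming_matching(stream))    # [(1, 2), (3, 4)]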

Cite as

Diba Hashemi and Weronika Wrzos-Kaminska. Weighted Matching in the Random-Order Streaming and Robust Communication Models. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{hashemi_et_al:LIPIcs.APPROX/RANDOM.2024.16,
  author =	{Hashemi, Diba and Wrzos-Kaminska, Weronika},
  title =	{{Weighted Matching in the Random-Order Streaming and Robust Communication Models}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  URN =		{urn:nbn:de:0030-drops-210097},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.16},
  annote =	{Keywords: Maximum Weight Matching, Streaming, Random-Order Streaming, Robust Communication Complexity}
}
Document
APPROX
A Constant Factor Approximation for Directed Feedback Vertex Set in Graphs of Bounded Genus

Authors: Hao Sun

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The minimum directed feedback vertex set problem consists in finding the minimum set of vertices that should be removed in order to make a directed graph acyclic. This is a well-known NP-hard optimization problem with applications in various fields, such as VLSI chip design, bioinformatics, deadlock prevention in transaction processing, and node-weighted network design. We show a constant factor approximation for the directed feedback vertex set problem in graphs of bounded genus.
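
For concreteness, a brute-force sketch of the problem being approximated, usable only on tiny digraphs and unrelated to the paper's bounded-genus techniques; the example instance is a placeholder.

from itertools import combinations

def is_acyclic(vertices, edges):
    """Kahn-style check: repeatedly remove vertices of in-degree zero."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    queue = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(vertices)

def min_dfvs(vertices, edges):
    """Smallest vertex set whose removal leaves the digraph acyclic (exponential time)."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            rest = [v for v in vertices if v not in S]
            rest_edges = [(u, v) for u, v in edges if u not in S and v not in S]
            if is_acyclic(rest, rest_edges):
                return set(S)

print(min_dfvs([1, 2, 3], [(1, 2), (2, 3), (3, 1)]))  # any single vertex, e.g. {1}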

Cite as

Hao Sun. A Constant Factor Approximation for Directed Feedback Vertex Set in Graphs of Bounded Genus. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 18:1-18:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{sun:LIPIcs.APPROX/RANDOM.2024.18,
  author =	{Sun, Hao},
  title =	{{A Constant Factor Approximation for Directed Feedback Vertex Set in Graphs of Bounded Genus}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{18:1--18:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.18},
  URN =		{urn:nbn:de:0030-drops-210112},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.18},
  annote =	{Keywords: Feedback Vertex Set, Combinatorial Optimization, Approximation Algorithms, min-max relation, linear programming}
}
Document
APPROX
On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting

Authors: Mayank Goswami and Riko Jacob

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We generalize the classical nuts and bolts problem to a setting where the input is a collection of n nuts and m bolts, and there is no promise of any matching pairs. It is not allowed to compare a nut directly with a nut or a bolt directly with a bolt, and the goal is to perform the fewest nut-bolt comparisons to discover the partial order between the nuts and bolts. We term this problem bipartite sorting. We show that instances of bipartite sorting of the same size exhibit a wide range of complexity, and propose to perform a fine-grained analysis for this problem. We rule out straightforward notions of instance-optimality as being too stringent, and adopt a neighborhood-based definition. Our definition may be of independent interest as a unifying lens for instance-optimal algorithms for other static problems existing in the literature. This includes problems like sorting (Estivill-Castro and Woods, ACM Comput. Surv. 1992), convex hull (Afshani, Barbay and Chan, JACM 2017), adaptive joins (Demaine, López-Ortiz and Munro, SODA 2000), and the recent concept of universal optimality for graphs (Haeupler, Hladík, Rozhoň, Tarjan and Tětek, 2023). As our main result on bipartite sorting, we give a randomized algorithm that is within a factor of O(log³(n+m)) of being instance-optimal w.h.p., with respect to the neighborhood-based definition. As our second contribution, we generalize bipartite sorting to DAG sorting, when the underlying DAG is not necessarily bipartite. As an unexpected consequence of a simple algorithm for DAG sorting, we rule out a potential lower bound on the widely-studied problem of sorting with priced information, posed by (Charikar, Fagin, Guruswami, Kleinberg, Raghavan and Sahai, STOC 2000). In this problem, comparing keys i and j has a known cost c_{ij} ∈ ℝ^+ ∪ {∞}, and the goal is to sort the keys in an instance-optimal way, by keeping the total cost of an algorithm as close as possible to ∑_{i=1}^{n-1} c_{x(i)x(i+1)}. Here x(1) < ⋯ < x(n) is the sorted order. While several special cases of cost functions have received a lot of attention in the community, no progress on the general version with arbitrary costs has been reported so far. One reason for this lack of progress seems to be a widely-cited Ω(n) lower bound on the competitive ratio for finding the maximum. This Ω(n) lower bound by (Gupta and Kumar, FOCS 2000) uses costs in {0, 1, n, ∞}, and although not extended to sorting, this barrier seems to have stalled any progress on the general cost case. We rule out such a potential lower bound by showing the existence of an algorithm with an Õ(n^{3/4}) competitive ratio for the {0, 1, n, ∞} cost version. This generalizes the setting of generalized sorting proposed by (Huang, Kannan and Khanna, FOCS 2011), where the costs are either 1 or infinity, and the cost of the cheapest proof is always n-1.
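
To fix the comparison model: only nut-vs-bolt comparisons are allowed, and each returns smaller/fits/larger. The sketch below (with placeholder sizes) recovers the full partial order with the trivial n·m comparisons; the paper asks when far fewer comparisons suffice on a given instance.

def compare(nut, bolt):
    """The only allowed query: compare a nut with a bolt.
    Returns -1 if the nut is smaller, 0 if it fits, +1 if it is larger."""
    return (nut > bolt) - (nut < bolt)

def full_partial_order(nuts, bolts):
    """Recover every nut-vs-bolt relation using n*m comparisons, the trivial
    upper bound; instance-optimal algorithms aim to beat this per instance."""
    return {(i, j): compare(nut, bolt)
            for i, nut in enumerate(nuts)
            for j, bolt in enumerate(bolts)}

nuts, bolts = [3, 1, 4], [2, 4]          # no promise of matching pairs
order = full_partial_order(nuts, bolts)
print(order[(2, 1)])                     # the nut of size 4 fits the bolt of size 4 -> 0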

Cite as

Mayank Goswami and Riko Jacob. On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{goswami_et_al:LIPIcs.APPROX/RANDOM.2024.23,
  author =	{Goswami, Mayank and Jacob, Riko},
  title =	{{On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  URN =		{urn:nbn:de:0030-drops-210168},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  annote =	{Keywords: Sorting, Priced Information, Instance Optimality, Nuts and Bolts}
}
  • Refine by Author
  • 52 Guruswami, Venkatesan
  • 7 Wootters, Mary
  • 5 Lee, Euiwoong
  • 5 Sandeep, Sai
  • 5 Xing, Chaoping
  • Show More...

  • Refine by Classification
  • 20 Theory of computation → Error-correcting codes
  • 15 Theory of computation → Pseudorandomness and derandomization
  • 15 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 14 Theory of computation → Algebraic complexity theory
  • 14 Theory of computation → Approximation algorithms analysis
  • Show More...

  • Refine by Keyword
  • 5 Coding theory
  • 5 Constraint Satisfaction Problems
  • 5 approximation algorithms
  • 5 hardness of approximation
  • 5 lower bounds
  • Show More...

  • Refine by Type
  • 243 document
  • 2 volume

  • Refine by Publication Year
  • 150 2024
  • 53 2022
  • 7 2019
  • 6 2015
  • 6 2017
  • Show More...
