19 Search Results for "Rubinstein, Aviad"


Document
Track A: Algorithms, Complexity and Games
Average-Case to (Shifted) Worst-Case Reduction for the Trace Reconstruction Problem

Authors: Ittai Rubinstein

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In the trace reconstruction problem, one is given many outputs (called traces) of a noise channel applied to the same input message x, and is asked to recover the input message. Common noise channels studied in the context of trace reconstruction include the deletion channel, which deletes each bit w.p. δ; the insertion channel, which inserts G_j i.i.d. uniformly distributed bits before the j-th bit of the input message (where the G_j are i.i.d. geometrically distributed with parameter σ); and the symmetry channel, which flips each bit of the input message i.i.d. w.p. γ. De et al. and Nazarov and Peres [De et al., 2017; Nazarov and Peres, 2017] showed that any string x can be reconstructed from exp(O(n^{1/3})) traces. Holden et al. [Holden et al., 2018] adapted the techniques used to prove this upper bound to construct an algorithm for average-case trace reconstruction from the insertion-deletion channel with a sample complexity of exp(O(log^{1/3} n)). However, it is not clear how to apply their techniques more generally, and in particular to the recent worst-case upper bound of exp(Õ(n^{1/5})) shown by Chase [Chase, 2021] for the deletion channel. We prove a general reduction from the average case to smaller instances of a problem similar to the worst case, and we extend Chase’s upper bound to this problem and to the symmetry and insertion channels as well. Using this reduction and generalization of Chase’s bound, we introduce an algorithm for average-case trace reconstruction from the symmetry-insertion-deletion channel with a sample complexity of exp(Õ(log^{1/5} n)).
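For concreteness, a minimal Python sketch of the three noise channels described above; the sampling conventions (in particular the parameterization of the geometric insertions and the order in which the channels would be composed) are illustrative assumptions, not the paper's exact model.

import random

def deletion_channel(x, delta):
    # Delete each bit independently with probability delta.
    return [b for b in x if random.random() >= delta]

def insertion_channel(x, sigma):
    # Before each input bit, insert G_j i.i.d. uniform bits, where G_j is
    # geometric (assumed parameterization: P(G_j >= g+1 | G_j >= g) = sigma).
    out = []
    for b in x:
        while random.random() < sigma:
            out.append(random.randint(0, 1))
        out.append(b)
    return out

def symmetry_channel(x, gamma):
    # Flip each bit independently with probability gamma.
    return [b ^ 1 if random.random() < gamma else b for b in x]

# One possible trace from a combined symmetry-insertion-deletion channel:
# trace = deletion_channel(insertion_channel(symmetry_channel(x, 0.05), 0.1), 0.1)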

Cite as

Ittai Rubinstein. Average-Case to (Shifted) Worst-Case Reduction for the Trace Reconstruction Problem. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 102:1-102:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rubinstein:LIPIcs.ICALP.2023.102,
  author =	{Rubinstein, Ittai},
  title =	{{Average-Case to (Shifted) Worst-Case Reduction for the Trace Reconstruction Problem}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{102:1--102:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.102},
  URN =		{urn:nbn:de:0030-drops-181542},
  doi =		{10.4230/LIPIcs.ICALP.2023.102},
  annote =	{Keywords: Trace Reconstruction, Synchronization Channels, Computational Learning Theory, Computational Biology}
}
Document
The Complexity of Infinite-Horizon General-Sum Stochastic Games

Authors: Yujia Jin, Vidya Muthukumar, and Aaron Sidford

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We study the complexity of computing a stationary Nash equilibrium (NE) in n-player infinite-horizon general-sum stochastic games. We focus on the problem of computing an NE in such stochastic games when each player is restricted to choosing a stationary policy and rewards are discounted. First, we prove that computing such an NE is in PPAD (in addition to clearly being PPAD-hard). Second, we consider turn-based specializations of such games, where at each state there is at most a single player that can take actions, and show that these (seemingly simpler) games remain PPAD-hard. Third, we show that under further structural assumptions on the rewards, computing an NE in such turn-based games is possible in polynomial time. Towards achieving these results, we establish structural facts about stochastic games of broader utility, including monotonicity of utilities under single-state single-action changes and reductions to settings where each player controls a single state.

Cite as

Yujia Jin, Vidya Muthukumar, and Aaron Sidford. The Complexity of Infinite-Horizon General-Sum Stochastic Games. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 76:1-76:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{jin_et_al:LIPIcs.ITCS.2023.76,
  author =	{Jin, Yujia and Muthukumar, Vidya and Sidford, Aaron},
  title =	{{The Complexity of Infinite-Horizon General-Sum Stochastic Games}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{76:1--76:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.76},
  URN =		{urn:nbn:de:0030-drops-175791},
  doi =		{10.4230/LIPIcs.ITCS.2023.76},
  annote =	{Keywords: complexity, stochastic games, general-sum games, Nash equilibrium}
}
Document
Improved Inapproximability of VC Dimension and Littlestone’s Dimension via (Unbalanced) Biclique

Authors: Pasin Manurangsi

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We study the complexity of computing (and approximating) VC Dimension and Littlestone’s Dimension when we are given the concept class explicitly. We give a simple reduction from the Maximum (Unbalanced) Biclique problem to approximating VC Dimension and Littlestone’s Dimension. With this connection, we derive a range of hardness of approximation results and running time lower bounds. For example, under the (randomized) Gap-Exponential Time Hypothesis or the Strongish Planted Clique Hypothesis, we show a tight inapproximability result: both dimensions are hard to approximate to within a factor of o(log n) in polynomial time. These improve upon constant-factor inapproximability results from [Pasin Manurangsi and Aviad Rubinstein, 2017].
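As a reminder of what is being computed, a brute-force (exponential-time) routine for the VC dimension of an explicitly given concept class; this only illustrates the problem statement, not the biclique reduction of the paper.

from itertools import combinations

def vc_dimension(domain, concepts):
    # concepts: list of sets over `domain`. A set S is shattered if every
    # subset of S arises as S ∩ C for some concept C; shattering is closed
    # under taking subsets, so we can stop at the first size that fails.
    best = 0
    for d in range(1, len(domain) + 1):
        shattered_any = False
        for S in combinations(domain, d):
            patterns = {frozenset(set(S) & C) for C in concepts}
            if len(patterns) == 2 ** d:
                shattered_any = True
                break
        if not shattered_any:
            break
        best = d
    return best

# Example: intervals over {1,...,4} (plus the empty set) have VC dimension 2.
# concepts = [set(range(i, j + 1)) for i in range(1, 5) for j in range(i - 1, 5)]
# print(vc_dimension([1, 2, 3, 4], concepts))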

Cite as

Pasin Manurangsi. Improved Inapproximability of VC Dimension and Littlestone’s Dimension via (Unbalanced) Biclique. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 85:1-85:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{manurangsi:LIPIcs.ITCS.2023.85,
  author =	{Manurangsi, Pasin},
  title =	{{Improved Inapproximability of VC Dimension and Littlestone’s Dimension via (Unbalanced) Biclique}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{85:1--85:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.85},
  URN =		{urn:nbn:de:0030-drops-175884},
  doi =		{10.4230/LIPIcs.ITCS.2023.85},
  annote =	{Keywords: VC Dimension, Littlestone’s Dimension, Maximum Biclique, Hardness of Approximation, Fine-Grained Complexity}
}
Document
Beyond Worst-Case Budget-Feasible Mechanism Design

Authors: Aviad Rubinstein and Junyao Zhao

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a "small-bidder assumption". Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal competitive ratio 1-1/e on worst-case instances. However, we observe that on many realistic instances, their mechanism is significantly outperformed by a simpler open clock auction by Ensthaler and Giebe (2014), although the open clock auction only achieves competitive ratio 1/2 in the worst case. Is there a mechanism that gets the best of both worlds, i.e., a mechanism that is worst-case optimal and performs favorably on realistic instances? To answer this question, we initiate the study of beyond worst-case budget-feasible mechanism design. Our first main result is the design and the analysis of a natural mechanism that gives an affirmative answer to our question above:
- We prove that on every instance, our mechanism performs at least as well as all uniform mechanisms, including Anari, Goel, and Nikzad’s and Ensthaler and Giebe’s mechanisms.
- Moreover, we empirically evaluate our mechanism on various realistic instances and observe that it beats the worst-case 1-1/e competitive ratio by a large margin and compares favorably to both mechanisms mentioned above.
Our second main result is more interesting in theory: we show that in the semi-adversarial model of budget-smoothed analysis, where the adversary designs a single worst-case market for a distribution of budgets, our mechanism is optimal among all (including non-uniform) mechanisms; furthermore, our mechanism guarantees a strictly better-than-(1-1/e) expected competitive ratio for any non-trivial budget distribution regardless of the market. (In contrast, given any bounded range of budgets, we can construct a single market where Anari, Goel, and Nikzad’s mechanism achieves only a 1-1/e competitive ratio for every budget in this range.) We complement the positive result with a characterization of the worst-case markets for any given budget distribution and prove a fairly robust hardness result that holds against any budget distribution and any mechanism.

Cite as

Aviad Rubinstein and Junyao Zhao. Beyond Worst-Case Budget-Feasible Mechanism Design. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 93:1-93:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rubinstein_et_al:LIPIcs.ITCS.2023.93,
  author =	{Rubinstein, Aviad and Zhao, Junyao},
  title =	{{Beyond Worst-Case Budget-Feasible Mechanism Design}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{93:1--93:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.93},
  URN =		{urn:nbn:de:0030-drops-175969},
  doi =		{10.4230/LIPIcs.ITCS.2023.93},
  annote =	{Keywords: Procurement auctions, Mechanism design, Beyond worst-case analysis}
}
Document
Further Collapses in TFNP

Authors: Mika Göös, Alexandros Hollender, Siddhartha Jain, Gilbert Maystre, William Pires, Robert Robere, and Ran Tao

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We show EOPL = PLS ∩ PPAD. Here the class EOPL consists of all total search problems that reduce to the End-of-Potential-Line problem, which was introduced in the works by Hubáček and Yogev (SICOMP 2020) and Fearnley et al. (JCSS 2020). In particular, our result yields a new simpler proof of the breakthrough collapse CLS = PLS ∩ PPAD by Fearnley et al. (STOC 2021). We also prove a companion result SOPL = PLS ∩ PPADS, where SOPL is the class associated with the Sink-of-Potential-Line problem.

Cite as

Mika Göös, Alexandros Hollender, Siddhartha Jain, Gilbert Maystre, William Pires, Robert Robere, and Ran Tao. Further Collapses in TFNP. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{goos_et_al:LIPIcs.CCC.2022.33,
  author =	{G\"{o}\"{o}s, Mika and Hollender, Alexandros and Jain, Siddhartha and Maystre, Gilbert and Pires, William and Robere, Robert and Tao, Ran},
  title =	{{Further Collapses in TFNP}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.33},
  URN =		{urn:nbn:de:0030-drops-165954},
  doi =		{10.4230/LIPIcs.CCC.2022.33},
  annote =	{Keywords: TFNP, PPAD, PLS, EOPL}
}
Document
Track A: Algorithms, Complexity and Games
Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel

Authors: Ittai Rubinstein

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Two of the most common models for channels with synchronisation errors are the Binary Deletion Channel with parameter p (BDC_p), a channel where every bit of the codeword is deleted i.i.d. with probability p, and the Poisson Repeat Channel with parameter λ (PRC_λ), a channel where every bit of the codeword is repeated Poisson(λ) times. Previous constructions based on synchronisation strings yielded codes with rates far lower than the capacities of these channels [Con and Shpilka, 2019; Guruswami and Li, 2018], and the only efficient construction to achieve capacity on the BDC at the time of writing this paper is based on the far more advanced methods of polar codes [Tal et al., 2021]. In this work, we present a new method for concatenating synchronisation codes and use it to construct simple and efficient encoding and decoding algorithms for both channels with nearly optimal rates.
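For reference, a minimal simulation of the two channels named in the title; the Poisson sampler is a simple inverse-transform helper, and the whole snippet is illustrative background, not part of the coding-theoretic construction.

import math
import random

def binary_deletion_channel(codeword, p):
    # BDC_p: each bit is deleted independently with probability p.
    return [b for b in codeword if random.random() >= p]

def poisson(lam):
    # Inverse-transform Poisson sampler (assumed helper; fine for moderate lam).
    u, k, prob, cdf = random.random(), 0, math.exp(-lam), math.exp(-lam)
    while u > cdf:
        k += 1
        prob *= lam / k
        cdf += prob
    return k

def poisson_repeat_channel(codeword, lam):
    # PRC_lambda: each bit is repeated Poisson(lambda) times (possibly zero times).
    out = []
    for b in codeword:
        out.extend([b] * poisson(lam))
    return out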

Cite as

Ittai Rubinstein. Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 105:1-105:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{rubinstein:LIPIcs.ICALP.2022.105,
  author =	{Rubinstein, Ittai},
  title =	{{Explicit and Efficient Construction of Nearly Optimal Rate Codes for the Binary Deletion Channel and the Poisson Repeat Channel}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{105:1--105:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.105},
  URN =		{urn:nbn:de:0030-drops-164466},
  doi =		{10.4230/LIPIcs.ICALP.2022.105},
  annote =	{Keywords: Error Correcting Codes, Algorithmic Coding Theory, Binary Deletion Channel}
}
Document
Track A: Algorithms, Complexity and Games
Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation

Authors: Aviad Rubinstein and Junyao Zhao

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
In this work we give two new algorithms that use similar techniques for (non-monotone) submodular function maximization subject to a cardinality constraint. The first is an offline fixed-parameter tractable algorithm that guarantees a 0.539-approximation for all non-negative submodular functions. The second algorithm works in the random-order streaming model. It guarantees a (1/2+c)-approximation for symmetric functions, and we complement it by showing that no space-efficient algorithm can beat 1/2 for asymmetric functions. To the best of our knowledge, this is the first provable separation between symmetric and asymmetric submodular function maximization.

Cite as

Aviad Rubinstein and Junyao Zhao. Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 106:1-106:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{rubinstein_et_al:LIPIcs.ICALP.2022.106,
  author =	{Rubinstein, Aviad and Zhao, Junyao},
  title =	{{Maximizing Non-Monotone Submodular Functions over Small Subsets: Beyond 1/2-Approximation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{106:1--106:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.106},
  URN =		{urn:nbn:de:0030-drops-164478},
  doi =		{10.4230/LIPIcs.ICALP.2022.106},
  annote =	{Keywords: Submodular optimization, Fixed-parameter tractability, Random-order streaming}
}
Document
Budget-Smoothed Analysis for Submodular Maximization

Authors: Aviad Rubinstein and Junyao Zhao

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
The greedy algorithm for monotone submodular function maximization subject to a cardinality constraint is guaranteed to approximate the optimal solution to within a 1-1/e factor. Although it is well known that this guarantee is essentially tight in the worst case (for greedy and, in fact, for any efficient algorithm), experiments show that greedy performs better in practice. We observe that for many applications in practice, the empirical distribution of the budgets (i.e., cardinality constraints) is supported on a wide range, and moreover, all the existing hardness results in theory break under a large perturbation of the budget. To understand the effect of the budget from both algorithmic and hardness perspectives, we introduce a new notion of budget-smoothed analysis. We prove that greedy is optimal for every budget distribution, and we give a characterization of the worst-case submodular functions. Based on these results, we show that, on the algorithmic side, under realistic budget distributions, greedy and related algorithms enjoy provably better approximation guarantees that hold even for worst-case functions, and, on the hardness side, there exist hard functions that are fairly robust to all budget distributions.
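For context, the classic greedy algorithm that attains the 1-1/e guarantee mentioned above; a generic sketch with f an arbitrary set-function value oracle, not the budget-smoothed analysis itself.

def greedy(f, ground_set, k):
    # Classic greedy for max f(S) s.t. |S| <= k, with f monotone submodular.
    # f: callable taking a set of elements and returning a real value.
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no remaining element adds value
        S.add(best)
    return S

# Example with a coverage function (coverage functions are monotone submodular):
# sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
# f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
# print(greedy(f, set(sets), 2))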

Cite as

Aviad Rubinstein and Junyao Zhao. Budget-Smoothed Analysis for Submodular Maximization. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 113:1-113:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{rubinstein_et_al:LIPIcs.ITCS.2022.113,
  author =	{Rubinstein, Aviad and Zhao, Junyao},
  title =	{{Budget-Smoothed Analysis for Submodular Maximization}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{113:1--113:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.113},
  URN =		{urn:nbn:de:0030-drops-157095},
  doi =		{10.4230/LIPIcs.ITCS.2022.113},
  annote =	{Keywords: Submodular optimization, Beyond worst-case analysis, Greedy algorithms, Hardness of approximation}
}
Document
Invited Talk
Sublinear Algorithms for Edit Distance (Invited Talk)

Authors: Barna Saha

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
The edit distance is a way of quantifying how similar two strings are to one another by counting the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. A simple dynamic program computes the edit distance between two strings of length n in O(n²) time, and a more sophisticated algorithm runs in time O(n+t²), where t is the distance (Landau, Myers and Schmidt, SICOMP 1998). In pursuit of faster running times, the last couple of decades have seen a flurry of research on approximating edit distance, including polylogarithmic approximation in near-linear time (Andoni, Krauthgamer and Onak, FOCS 2010) and a constant-factor approximation in subquadratic time (Chakrabarty, Das, Goldenberg, Koucký and Saks, FOCS 2018). In this talk, we will discuss recent progress that goes beyond linear time and studies sublinear-time algorithms for edit distance. We will also discuss the role preprocessing might play in designing fast algorithms. This is joint work with Elazar Goldenberg, Tomasz Kociumaka, Robert Krauthgamer, and Aviad Rubinstein.
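For reference, the textbook O(n²) dynamic program mentioned in the first sentence; a standard implementation, not one of the sublinear-time algorithms that are the subject of the talk.

def edit_distance(a: str, b: str) -> int:
    # Classic O(|a|*|b|) dynamic program over insertions, deletions, substitutions,
    # keeping a single rolling row of the DP table.
    n, m = len(a), len(b)
    dp = list(range(m + 1))  # dp[j] = distance between a[:0] and b[:j]
    for i in range(1, n + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete a[i-1]
                        dp[j - 1] + 1,                  # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute (or match)
            prev = cur
    return dp[m]

# edit_distance("kitten", "sitting") == 3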

Cite as

Barna Saha. Sublinear Algorithms for Edit Distance (Invited Talk). In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{saha:LIPIcs.MFCS.2021.5,
  author =	{Saha, Barna},
  title =	{{Sublinear Algorithms for Edit Distance}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-144452},
  doi =		{10.4230/LIPIcs.MFCS.2021.5},
  annote =	{Keywords: Edit distance, sublinear algorithms, string processing}
}
Document
Track A: Algorithms, Complexity and Games
Streaming and Small Space Approximation Algorithms for Edit Distance and Longest Common Subsequence

Authors: Kuan Cheng, Alireza Farhadi, MohammadTaghi Hajiaghayi, Zhengzhong Jin, Xin Li, Aviad Rubinstein, Saeed Seddighin, and Yu Zheng

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
The edit distance (ED) and longest common subsequence (LCS) are two fundamental problems which quantify how similar two strings are to one another. In this paper, we first consider these problems in the asymmetric streaming model introduced by Andoni, Krauthgamer and Onak [Andoni et al., 2010] (FOCS'10) and Saks and Seshadhri [Saks and Seshadhri, 2013] (SODA'13). In this model we have random access to one string and streaming access to the other one. Our main contribution is a constant-factor approximation algorithm for ED with memory Õ(n^δ) for any constant δ > 0. In addition to this, we present an upper bound of Õ_ε(√n) on the memory needed to approximate ED or LCS within a factor of 1±ε. All our algorithms are deterministic and run in polynomial time in a single pass. We further study small-space approximation algorithms for ED, LCS, and longest increasing subsequence (LIS) in the non-streaming setting. Here, we design algorithms that achieve 1 ± ε approximation for all three problems, where ε > 0 can be any constant and even slightly sub-constant. Our algorithms use only poly-logarithmic space while maintaining a polynomial running time. This significantly improves previous results in terms of space complexity, where all known results need to use space at least Ω(√n). Our algorithms make novel use of the triangle inequality and carefully designed recursions to save space, which may be of independent interest.
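For completeness, the standard quadratic-time dynamic program for LCS, kept to one row of memory; the paper's algorithms use only poly-logarithmic space, so this Θ(n)-space routine is just a baseline fixing the problem definition.

def lcs_length(a: str, b: str) -> int:
    # Standard DP for the longest common subsequence, keeping one row at a time.
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

# lcs_length("ABCBDAB", "BDCABA") == 4   (e.g. "BCAB")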

Cite as

Kuan Cheng, Alireza Farhadi, MohammadTaghi Hajiaghayi, Zhengzhong Jin, Xin Li, Aviad Rubinstein, Saeed Seddighin, and Yu Zheng. Streaming and Small Space Approximation Algorithms for Edit Distance and Longest Common Subsequence. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 54:1-54:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cheng_et_al:LIPIcs.ICALP.2021.54,
  author =	{Cheng, Kuan and Farhadi, Alireza and Hajiaghayi, MohammadTaghi and Jin, Zhengzhong and Li, Xin and Rubinstein, Aviad and Seddighin, Saeed and Zheng, Yu},
  title =	{{Streaming and Small Space Approximation Algorithms for Edit Distance and Longest Common Subsequence}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{54:1--54:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.54},
  URN =		{urn:nbn:de:0030-drops-141236},
  doi =		{10.4230/LIPIcs.ICALP.2021.54},
  annote =	{Keywords: Edit Distance, Longest Common Subsequence, Longest Increasing Subsequence, Space Efficient Algorithm, Approximation Algorithm}
}
Document
The Strongish Planted Clique Hypothesis and Its Consequences

Authors: Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^Ω(log n) (so that the state-of-the-art running time of n^O(log n) is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^o(1) factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.
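To make the underlying problem concrete, a small generator for a planted-clique instance: sample G(n, 1/2) and plant a clique on k random vertices; per the abstract, SPCH postulates that any algorithm solving planted clique must run in time n^Ω(log n). The generator below is an illustrative sketch only.

import random

def planted_clique_instance(n, k):
    # Sample an Erdős–Rényi graph G(n, 1/2) and plant a clique on k random vertices.
    clique = set(random.sample(range(n), k))
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if (u in clique and v in clique) or random.random() < 0.5:
                edges.add((u, v))
    return edges, clique

# edges, planted = planted_clique_instance(200, 20)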

Cite as

Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm. The Strongish Planted Clique Hypothesis and Its Consequences. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{manurangsi_et_al:LIPIcs.ITCS.2021.10,
  author =	{Manurangsi, Pasin and Rubinstein, Aviad and Schramm, Tselil},
  title =	{{The Strongish Planted Clique Hypothesis and Its Consequences}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.10},
  URN =		{urn:nbn:de:0030-drops-135491},
  doi =		{10.4230/LIPIcs.ITCS.2021.10},
  annote =	{Keywords: Planted Clique, Densest k-Subgraph, Hardness of Approximation}
}
Document
Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria

Authors: Kousha Etessami, Christos Papadimitriou, Aviad Rubinstein, and Mihalis Yannakakis

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
The use of monotonicity and Tarski’s theorem in existence proofs of equilibria is very widespread in economics, while Tarski’s theorem is also often used for similar purposes in the context of verification. However, there has been relatively little in the way of analysis of the complexity of finding the fixed points and equilibria guaranteed by this result. We study a computational formalism based on monotone functions on the d-dimensional grid with sides of length N, and their fixed points, as well as the closely connected subject of supermodular games and their equilibria. It is known that finding some (any) fixed point of a monotone function can be done in time log^d N, and we show it requires at least log^2 N function evaluations already on the 2-dimensional grid, even for randomized algorithms. We show that the general Tarski problem of finding some fixed point, when the monotone function is given succinctly (by a boolean circuit), is in the class PLS of problems solvable by local search and, rather surprisingly, also in the class PPAD. Finding the greatest or least fixed point guaranteed by Tarski’s theorem, however, requires d ⋅ N steps, and is NP-hard in the white box model. For supermodular games, we show that finding an equilibrium in such games is essentially computationally equivalent to the Tarski problem, and finding the maximum or minimum equilibrium is similarly harder. Interestingly, two-player supermodular games where the strategy space of one player is one-dimensional can be solved in O(log N) steps. We also show that computing (approximating) the value of Condon’s (Shapley’s) stochastic games reduces to the Tarski problem. An important open problem highlighted by this work is proving an Ω(log^d N) lower bound for small fixed dimension d ≥ 3.
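To illustrate the lattice setting, a straightforward Kleene-style iteration that finds the least fixed point of a monotone function on the grid {0,...,N-1}^d by repeatedly applying f from the bottom element; this takes up to d·N applications, in line with the d ⋅ N bound for least/greatest fixed points mentioned above (it is not the log^d N algorithm for finding some fixed point).

def least_fixed_point(f, d, N):
    # f maps a tuple in {0,...,N-1}^d to a tuple in the same grid and is assumed
    # monotone coordinatewise. Starting from the bottom element, x <= f(x) is
    # preserved under iteration, so the sequence increases to the least fixed point.
    x = (0,) * d
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: a monotone map on {0,...,9}^2 whose least fixed point is (3, 5).
# print(least_fixed_point(lambda v: (max(v[0], 3), max(v[1], 5)), 2, 10))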

Cite as

Kousha Etessami, Christos Papadimitriou, Aviad Rubinstein, and Mihalis Yannakakis. Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 18:1-18:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{etessami_et_al:LIPIcs.ITCS.2020.18,
  author =	{Etessami, Kousha and Papadimitriou, Christos and Rubinstein, Aviad and Yannakakis, Mihalis},
  title =	{{Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{18:1--18:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.18},
  URN =		{urn:nbn:de:0030-drops-117037},
  doi =		{10.4230/LIPIcs.ITCS.2020.18},
  annote =	{Keywords: Tarski’s theorem, supermodular games, monotone functions, lattices, fixed points, Nash equilibria, computational complexity, PLS, PPAD, stochastic games, oracle model, lower bounds}
}
Document
Optimal Single-Choice Prophet Inequalities from Samples

Authors: Aviad Rubinstein, Jack Z. Wang, and S. Matthew Weinberg

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We study the single-choice Prophet Inequality problem when the gambler is given access to samples. We show that the optimal competitive ratio of 1/2 can be achieved with a single sample from each distribution. When the distributions are identical, we show that for any constant ε > 0, O(n) samples from the distribution suffice to achieve the optimal competitive ratio (≈ 0.745) within (1+ε), resolving an open problem of [José R. Correa et al., 2019].
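One natural rule compatible with the single-sample guarantee stated above is to draw one sample from each distribution, set the threshold to the largest sample, and accept the first arriving value that clears it. The abstract only asserts that a single sample per distribution suffices for the optimal ratio 1/2, so the sketch below is an illustrative simulation of such a threshold rule, not a restatement of the paper's analysis.

import random

def single_sample_threshold_run(distributions):
    # distributions: list of zero-argument samplers, one per arriving value.
    samples = [draw() for draw in distributions]   # one sample from each distribution
    threshold = max(samples)
    values = [draw() for draw in distributions]    # fresh values actually arriving online
    for v in values:
        if v >= threshold:
            return v, max(values)                  # gambler's value vs. prophet's value
    return 0.0, max(values)

# Example with i.i.d. exponential values:
# gambler, prophet = single_sample_threshold_run([lambda: random.expovariate(1.0)] * 5)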

Cite as

Aviad Rubinstein, Jack Z. Wang, and S. Matthew Weinberg. Optimal Single-Choice Prophet Inequalities from Samples. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 60:1-60:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{rubinstein_et_al:LIPIcs.ITCS.2020.60,
  author =	{Rubinstein, Aviad and Wang, Jack Z. and Weinberg, S. Matthew},
  title =	{{Optimal Single-Choice Prophet Inequalities from Samples}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{60:1--60:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.60},
  URN =		{urn:nbn:de:0030-drops-117452},
  doi =		{10.4230/LIPIcs.ITCS.2020.60},
  annote =	{Keywords: Online algorithms, Probability, Optimization, Prophet inequalities, Samples, Auctions}
}
Document
Hardness of Bichromatic Closest Pair with Jaccard Similarity

Authors: Rasmus Pagh, Nina Mesing Stausholm, and Mikkel Thorup

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
Consider collections A and B of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from A × B that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity |a ∩ b|/|a ∪ b| for (a,b) ∈ A × B. We consider the approximate version of the problem where we are given thresholds j_1 > j_2 and wish to return a pair from A × B that has Jaccard similarity higher than j_2 if there exists a pair in A × B with Jaccard similarity at least j_1. The classic locality sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of Broder et al., solves this problem in Õ(n^{2-δ}) time if j_1 ≥ j_2^{1-δ}. In particular, for δ = Ω(1), the approximation ratio j_1/j_2 = 1/j_2^δ increases polynomially in 1/j_2. In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in O(n^{2-Ω(1)}) time for j_1/j_2 = 1/j_2^{o(1)}. Specifically, assuming OVC, we prove that for any δ > 0 there exists an ε > 0 such that Bichromatic Closest Pair with Jaccard similarity requires time Ω(n^{2-δ}) for any choice of thresholds j_2 < j_1 < 1-δ that satisfy j_1 ≤ j_2^{1-ε}.
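For context, a short sketch of MinHash-style Jaccard estimation of the kind underlying the LSH approach cited above: under a random hash function, two sets share the same minimum hash value with probability equal to their Jaccard similarity. The linear hash family h(x) = (a·x + b) mod p used here is a common stand-in for random permutations and is an illustrative assumption.

import random

def minhash_signature(s, hash_params, p=2_147_483_647):
    # s: set of non-negative integers; hash_params: list of (a, b) pairs defining
    # hash functions h(x) = (a*x + b) mod p, with p a large prime.
    return [min((a * x + b) % p for x in s) for a, b in hash_params]

def estimated_jaccard(sig_a, sig_b):
    # Fraction of coordinates on which the two signatures agree.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

# hash_params = [(random.randrange(1, 2**31 - 1), random.randrange(2**31 - 1)) for _ in range(128)]
# a, b = {1, 2, 3, 4}, {2, 3, 4, 5}
# estimated_jaccard(minhash_signature(a, hash_params), minhash_signature(b, hash_params))  # ≈ 0.6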

Cite as

Rasmus Pagh, Nina Mesing Stausholm, and Mikkel Thorup. Hardness of Bichromatic Closest Pair with Jaccard Similarity. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 74:1-74:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{pagh_et_al:LIPIcs.ESA.2019.74,
  author =	{Pagh, Rasmus and Stausholm, Nina Mesing and Thorup, Mikkel},
  title =	{{Hardness of Bichromatic Closest Pair with Jaccard Similarity}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{74:1--74:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.74},
  URN =		{urn:nbn:de:0030-drops-111951},
  doi =		{10.4230/LIPIcs.ESA.2019.74},
  annote =	{Keywords: fine-grained complexity, set similarity search, bichromatic closest pair, jaccard similarity}
}
Document
Track A: Algorithms, Complexity and Games
Unique End of Potential Line

Authors: John Fearnley, Spencer Gordon, Ruta Mehta, and Rahul Savani

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
The complexity class CLS was proposed by Daskalakis and Papadimitriou in 2011 to understand the complexity of important NP search problems that admit both path-following and potential-optimizing algorithms. Here we identify a subclass of CLS, called UniqueEOPL, that applies a more specific combinatorial principle which guarantees unique solutions. We show that UniqueEOPL contains several important problems such as the P-matrix Linear Complementarity Problem, finding fixed points of Contraction Maps, and solving Unique Sink Orientations (USOs). UniqueEOPL seems to be a proper subclass of CLS and looks more likely to be the right class for the problems of interest. We identify a problem, closely related to solving contraction maps and USOs, that is complete for UniqueEOPL. Our results also give the fastest randomised algorithm for P-matrix LCP.

Cite as

John Fearnley, Spencer Gordon, Ruta Mehta, and Rahul Savani. Unique End of Potential Line. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 56:1-56:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{fearnley_et_al:LIPIcs.ICALP.2019.56,
  author =	{Fearnley, John and Gordon, Spencer and Mehta, Ruta and Savani, Rahul},
  title =	{{Unique End of Potential Line}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{56:1--56:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.56},
  URN =		{urn:nbn:de:0030-drops-106327},
  doi =		{10.4230/LIPIcs.ICALP.2019.56},
  annote =	{Keywords: P-matrix linear complementarity problem, unique sink orientation, contraction map, TFNP, total search problems, continuous local search}
}
  • Refine by Author
  • 11 Rubinstein, Aviad
  • 3 Zhao, Junyao
  • 2 Manurangsi, Pasin
  • 2 Rubinstein, Ittai
  • 2 Schramm, Tselil

  • Refine by Classification
  • 5 Theory of computation → Problems, reductions and completeness
  • 2 Theory of computation → Complexity classes
  • 2 Theory of computation → Submodular optimization and polymatroids
  • 1 Mathematics of computing → Coding theory
  • 1 Theory of computation

  • Refine by Keyword
  • 2 Beyond worst-case analysis
  • 2 Hardness of Approximation
  • 2 Longest Common Subsequence
  • 2 PLS
  • 2 PPAD

  • Refine by Type
  • 19 document

  • Refine by Publication Year
  • 4 2022
  • 4 2023
  • 3 2021
  • 2 2017
  • 2 2018
