12 Search Results for "Mohanty, Sidhanth"


Document
Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries

Authors: Gil Cohen and Tal Yankovitz

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Recently, Kumar and Mon reached a significant milestone by constructing asymptotically good relaxed locally correctable codes (RLCCs) with poly-logarithmic query complexity. Specifically, they constructed n-bit RLCCs with O(log^{69} n) queries. Their construction relies on a clever reduction to locally testable codes (LTCs), capitalizing on recent breakthrough works in LTCs. As for lower bounds, Gur and Lachish (SICOMP 2021) proved that any asymptotically-good RLCC must make Ω̃(√{log n}) queries. Hence emerges the intriguing question regarding the identity of the least value 1/2 ≤ e ≤ 69 for which asymptotically-good RLCCs with query complexity (log n)^{e+o(1)} exist. In this work, we make substantial progress in narrowing the gap by devising asymptotically-good RLCCs with a query complexity of (log n)^{2+o(1)}. The key insight driving our work lies in recognizing that the strong guarantee of local testability overshoots the requirements for the Kumar-Mon reduction. In particular, we prove that we can replace the LTCs by "vanilla" expander codes which indeed have the necessary property: local testability in the code’s vicinity.
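
For orientation, the standard relaxed local correction guarantee used in this line of work, stated as a sketch in our own notation (the paper's exact parameters may differ). A q-query relaxed local corrector 𝒜 for a code C ⊆ {0,1}ⁿ must satisfy:

\begin{itemize}
  \item If $w \in C$, then $\Pr[\mathcal{A}^{w}(i) = w_i] = 1$ for every coordinate $i \in [n]$.
  \item If $w$ is $\tau$-close to a codeword $c \in C$ (for $\tau$ below half the relative distance), then $\Pr[\mathcal{A}^{w}(i) \in \{c_i, \bot\}] \ge 2/3$ for every $i \in [n]$.
  \item $\mathcal{A}$ reads at most $q$ coordinates of $w$ on every execution.
\end{itemize}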

Cite as

Gil Cohen and Tal Yankovitz. Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 8:1-8:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.CCC.2024.8,
  author =	{Cohen, Gil and Yankovitz, Tal},
  title =	{{Asymptotically-Good RLCCs with (log n)^(2+o(1)) Queries}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.8},
  URN =		{urn:nbn:de:0030-drops-204045},
  doi =		{10.4230/LIPIcs.CCC.2024.8},
  annote =	{Keywords: Relaxed locally decodable codes, Relaxed locally correctable codes, RLCC, RLDC}
}
Document
Finding Missing Items Requires Strong Forms of Randomness

Authors: Amit Chakrabarti and Manuel Stoeckl

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Adversarially robust streaming algorithms are required to process a stream of elements and produce correct outputs, even when each stream element can be chosen as a function of earlier algorithm outputs. As with classic streaming algorithms, which must only be correct for the worst-case fixed stream, adversarially robust algorithms with access to randomness can use significantly less space than deterministic algorithms. We prove that for the Missing Item Finding problem in streaming, the space complexity also significantly depends on how adversarially robust algorithms are permitted to use randomness. (In contrast, the space complexity of classic streaming algorithms does not depend as strongly on the way randomness is used.) For Missing Item Finding on streams of length 𝓁 with elements in {1,…,n}, and ≤ 1/poly(𝓁) error, we show that when 𝓁 = O(2^√{log n}), "random seed" adversarially robust algorithms, which only use randomness at initialization, require 𝓁^Ω(1) bits of space, while "random tape" adversarially robust algorithms, which may make random decisions at any time, may use O(polylog(𝓁)) random bits. When 𝓁 is between n^Ω(1) and O(√n), "random tape" adversarially robust algorithms need 𝓁^Ω(1) space, while "random oracle" adversarially robust algorithms, which can read from a long random string for free, may use O(polylog(𝓁)) space. The space lower bound for the "random seed" case follows, by a reduction given in prior work, from a lower bound for pseudo-deterministic streaming algorithms given in this paper.
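
As a toy illustration of the "random seed" model only (all randomness is drawn at initialization, after which processing is deterministic), a minimal Python sketch of Missing Item Finding; this is not the paper's algorithm, it is correct only because it stores all Θ(𝓁 log n) bits of its candidates, and it makes no claim about the space bounds above:

import random

class RandomSeedMIF:
    """Toy missing-item finder in the "random seed" model: all randomness is
    fixed at initialization; processing each stream item is then deterministic."""

    def __init__(self, n, stream_len, seed=None):
        rng = random.Random(seed)
        # Sample stream_len + 1 distinct candidates up front; any stream of
        # length stream_len contains at most stream_len distinct values,
        # so at least one candidate never appears.
        self.candidates = set(rng.sample(range(1, n + 1), stream_len + 1))

    def process(self, item):
        self.candidates.discard(item)  # deterministic given the seed

    def missing(self):
        return next(iter(self.candidates))  # a value absent from the stream so far

mif = RandomSeedMIF(n=100, stream_len=10, seed=0)
for x in [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]:
    mif.process(x)
print(mif.missing())  # an element of {1,...,100} that never appeared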

Cite as

Amit Chakrabarti and Manuel Stoeckl. Finding Missing Items Requires Strong Forms of Randomness. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakrabarti_et_al:LIPIcs.CCC.2024.28,
  author =	{Chakrabarti, Amit and Stoeckl, Manuel},
  title =	{{Finding Missing Items Requires Strong Forms of Randomness}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.28},
  URN =		{urn:nbn:de:0030-drops-204242},
  doi =		{10.4230/LIPIcs.CCC.2024.28},
  annote =	{Keywords: Data streaming, lower bounds, space complexity, adversarial robustness, derandomization, sketching, sampling}
}
Document
Track A: Algorithms, Complexity and Games
Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration

Authors: Shuichi Hirahara and Naoto Ohsaka

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In the Minmax Set Cover Reconfiguration problem, given a set system ℱ over a universe 𝒰 and its two covers 𝒞^start and 𝒞^goal of size k, we wish to transform 𝒞^start into 𝒞^goal by repeatedly adding or removing a single set of ℱ while covering the universe 𝒰 in any intermediate state. Then, the objective is to minimize the maximum size of any intermediate cover during transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of 2-(1/polyloglog N), where N is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) [Ohsaka, 2024] and Karthik C. S. and Manurangsi (2023) [Karthik C. S. and Manurangsi, 2023]. This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem because both problems admit a 2-factor approximation algorithm as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011) [Takehiro Ito et al., 2011]. Our proof is based on a reconfiguration analogue of the FGLSS reduction [Feige et al., 1996] from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (STOC 2024) [Hirahara and Ohsaka, 2024]. We also prove that for any constant ε ∈ (0,1), Minmax Hypergraph Vertex Cover Reconfiguration on poly(ε^-1)-uniform hypergraphs is PSPACE-hard to approximate within a factor of 2-ε.
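
A tiny worked instance, only to unpack the objective (illustrative, not taken from the paper): let 𝒰 = {1,2,3} and ℱ = {A, B, C} with A = {1,2}, B = {2,3}, C = {1,3}, and take 𝒞^start = {A,B}, 𝒞^goal = {B,C}, both of size k = 2. The transformation {A,B} → {A,B,C} → {B,C} keeps 𝒰 covered throughout and has maximum intermediate size 3; no transformation does better, since removing either set of 𝒞^start first would leave an element uncovered, so the optimum of this instance is 3.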

Cite as

Shuichi Hirahara and Naoto Ohsaka. Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 85:1-85:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ICALP.2024.85,
  author =	{Hirahara, Shuichi and Ohsaka, Naoto},
  title =	{{Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{85:1--85:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.85},
  URN =		{urn:nbn:de:0030-drops-202283},
  doi =		{10.4230/LIPIcs.ICALP.2024.85},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, probabilistic proof systems, FGLSS reduction}
}
Document
Track A: Algorithms, Complexity and Games
Alphabet Reduction for Reconfiguration Problems

Authors: Naoto Ohsaka

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present a reconfiguration analogue of alphabet reduction à la Dinur (J. ACM, 2007) and its applications. Given a binary constraint graph G and its two satisfying assignments ψ^ini and ψ^tar, the Maxmin 2-CSP Reconfiguration problem asks to transform ψ^ini into ψ^tar by repeatedly changing the value of a single vertex so that the minimum fraction of satisfied edges is maximized. We demonstrate a polynomial-time reduction from Maxmin 2-CSP Reconfiguration with arbitrarily large alphabet size W ∈ ℕ to itself with universal alphabet size W₀ ∈ ℕ such that 1) perfect completeness is preserved, and 2) if any reconfiguration for the former violates an ε-fraction of edges, then an Ω(ε)-fraction of edges must be unsatisfied during any reconfiguration for the latter. The crux of the construction is the reconfigurability of Hadamard codes, which makes it possible to reconfigure between a pair of codewords while avoiding getting too close to the other codewords. Combining this alphabet reduction with the gap amplification due to Ohsaka (SODA 2024), we are able to amplify the 1 vs. 1-ε gap for arbitrarily small ε ∈ (0,1) up to the 1 vs. 1-ε₀ gap for some universal ε₀ ∈ (0,1) without blowing up the alphabet size. In particular, a 1 vs. 1-ε₀ gap version of Maxmin 2-CSP Reconfiguration with alphabet size W₀ is PSPACE-hard given a probabilistically checkable reconfiguration proof system with any soundness error 1-ε due to Hirahara and Ohsaka (STOC 2024) and Karthik C. S. and Manurangsi (2023). As an immediate corollary, we show that there exists a universal constant ε₀ ∈ (0,1) such that many popular reconfiguration problems are PSPACE-hard to approximate within a factor of 1-ε₀, including those of 3-SAT, Independent Set, Vertex Cover, Clique, Dominating Set, and Set Cover. This could not be achieved by the gap amplification of Ohsaka alone, which makes the alphabet size gigantic depending on ε^-1.
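
In symbols, the Maxmin 2-CSP Reconfiguration objective described above (a standard way to write it; notation ours):

\[
\max_{\substack{\psi^{\mathrm{ini}} = \psi^{(0)},\, \psi^{(1)},\, \ldots,\, \psi^{(T)} = \psi^{\mathrm{tar}} \\ \psi^{(t)},\ \psi^{(t+1)} \text{ differ on a single vertex}}}
\;\; \min_{0 \le t \le T} \;\;
\frac{\bigl|\{\, e \in E(G) : \psi^{(t)} \text{ satisfies } e \,\}\bigr|}{|E(G)|}.
\]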

Cite as

Naoto Ohsaka. Alphabet Reduction for Reconfiguration Problems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 113:1-113:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ohsaka:LIPIcs.ICALP.2024.113,
  author =	{Ohsaka, Naoto},
  title =	{{Alphabet Reduction for Reconfiguration Problems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{113:1--113:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.113},
  URN =		{urn:nbn:de:0030-drops-202560},
  doi =		{10.4230/LIPIcs.ICALP.2024.113},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, Hadamard codes, alphabet reduction}
}
Document
𝓁_p-Spread and Restricted Isometry Properties of Sparse Random Matrices

Authors: Venkatesan Guruswami, Peter Manohar, and Jonathan Mosheiff

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
Random subspaces X of ℝⁿ of dimension proportional to n are, with high probability, well-spread with respect to the 𝓁₂-norm. Namely, every nonzero x ∈ X is "robustly non-sparse" in the following sense: x is ε ‖x‖₂-far in 𝓁₂-distance from all δ n-sparse vectors, for positive constants ε, δ bounded away from 0. This "𝓁₂-spread" property is the natural counterpart, for subspaces over the reals, of the minimum distance of linear codes over finite fields, and corresponds to X being a Euclidean section of the 𝓁₁ unit ball. Explicit 𝓁₂-spread subspaces of dimension Ω(n), however, are unknown, and the best known explicit constructions (which achieve weaker spread properties) are analogs of low-density parity-check (LDPC) codes over the reals, i.e., they are kernels of certain sparse matrices. Motivated by this, we study the spread properties of the kernels of sparse random matrices. We prove that with high probability such subspaces contain vectors x that are o(1)⋅‖x‖₂-close to o(n)-sparse with respect to the 𝓁₂-norm, and in particular are not 𝓁₂-spread. This is strikingly different from the case of random LDPC codes, whose distance is asymptotically almost as good as that of (dense) random linear codes. On the other hand, for p < 2 we prove that such subspaces are 𝓁_p-spread with high probability. The spread property of sparse random matrices thus exhibits a threshold behavior at p = 2. Our proof for p < 2 moreover shows that a random sparse matrix has the stronger restricted isometry property (RIP) with respect to the 𝓁_p norm, and in fact this follows solely from the unique expansion of a random biregular graph, yielding a somewhat unexpected generalization of a similar result for the 𝓁₁ norm [Berinde et al., 2008]. Instantiating this with suitable explicit expanders, we obtain the first explicit constructions of 𝓁_p-RIP matrices for 1 ≤ p < p₀, where 1 < p₀ < 2 is an absolute constant.
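
Restating the 𝓁₂-spread property from the abstract as a formula (notation ours): a subspace X ⊆ ℝⁿ is (ε, δ)-spread if

\[
\forall\, x \in X \setminus \{0\},\ \ \forall\, y \in \mathbb{R}^{n} \text{ with } \|y\|_{0} \le \delta n:
\qquad \|x - y\|_{2} \;\ge\; \varepsilon \,\|x\|_{2},
\]

and the paper studies the analogous 𝓁_p version together with the 𝓁_p restricted isometry property.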

Cite as

Venkatesan Guruswami, Peter Manohar, and Jonathan Mosheiff. 𝓁_p-Spread and Restricted Isometry Properties of Sparse Random Matrices. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.CCC.2022.7,
  author =	{Guruswami, Venkatesan and Manohar, Peter and Mosheiff, Jonathan},
  title =	{{$\ell_p$-Spread and Restricted Isometry Properties of Sparse Random Matrices}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.7},
  URN =		{urn:nbn:de:0030-drops-165695},
  doi =		{10.4230/LIPIcs.CCC.2022.7},
  annote =	{Keywords: Spread Subspaces, Euclidean Sections, Restricted Isometry Property, Sparse Matrices}
}
Document
Certifying Solution Geometry in Random CSPs: Counts, Clusters and Balance

Authors: Jun-Ting Hsieh, Sidhanth Mohanty, and Jeff Xu

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
An active topic in the study of random constraint satisfaction problems (CSPs) is the geometry of the space of satisfying or almost satisfying assignments as a function of the density, for which a precise landscape of predictions has been made via statistical physics-based heuristics. In parallel, there has been a recent flurry of work on refuting random constraint satisfaction problems, via nailing refutation thresholds for spectral and semidefinite programming-based algorithms, and also on counting solutions to CSPs. Inspired by this, the starting point for our work is the following question: What does the solution space for a random CSP look like to an efficient algorithm? In pursuit of this inquiry, we focus on the following problems about random Boolean CSPs at the densities where they are unsatisfiable but no refutation algorithm is known. 1) Counts. For every Boolean CSP we give algorithms that with high probability certify a subexponential upper bound on the number of solutions. We also give algorithms to certify a bound on the number of large cuts in a Gaussian-weighted graph, and the number of large independent sets in a random d-regular graph. 2) Clusters. For Boolean 3CSPs we give algorithms that with high probability certify an upper bound on the number of clusters of solutions. 3) Balance. We also give algorithms that with high probability certify that there are no "unbalanced" solutions, i.e., solutions where the fraction of +1s deviates significantly from 50%. Finally, we also provide hardness evidence suggesting that our algorithms for counting are optimal.
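
Here "certify" is meant in the standard sense from the refutation literature (our paraphrase, not the paper's formal statement): the algorithm outputs a bound that is valid on every instance and is nontrivial with high probability over the random instance. For example, for the counting results,

\[
\forall\, \text{instances } I:\ \ \mathrm{ALG}(I) \;\ge\; \#\{\text{solutions of } I\},
\qquad\text{and}\qquad
\Pr_{I \sim \text{random}}\bigl[\,\mathrm{ALG}(I) \le 2^{o(n)}\,\bigr] \;\ge\; 1 - o(1).
\]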

Cite as

Jun-Ting Hsieh, Sidhanth Mohanty, and Jeff Xu. Certifying Solution Geometry in Random CSPs: Counts, Clusters and Balance. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 11:1-11:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hsieh_et_al:LIPIcs.CCC.2022.11,
  author =	{Hsieh, Jun-Ting and Mohanty, Sidhanth and Xu, Jeff},
  title =	{{Certifying Solution Geometry in Random CSPs: Counts, Clusters and Balance}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{11:1--11:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.11},
  URN =		{urn:nbn:de:0030-drops-165735},
  doi =		{10.4230/LIPIcs.CCC.2022.11},
  annote =	{Keywords: constraint satisfaction problems, certified counting, random graphs}
}
Document
Track A: Algorithms, Complexity and Games
High-Girth Near-Ramanujan Graphs with Lossy Vertex Expansion

Authors: Theo McKenzie and Sidhanth Mohanty

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Kahale proved that linear-sized sets in d-regular Ramanujan graphs have vertex expansion at least d/2 and complemented this with a construction of near-Ramanujan graphs with vertex expansion no better than d/2. However, the construction of Kahale encounters highly local obstructions to better vertex expansion. In particular, the poorly expanding sets are associated with short cycles in the graph. Thus, it is natural to ask whether the vertex expansion of high-girth Ramanujan graphs breaks past the d/2 bound. Our results are two-fold: 1) For every d = p+1 for prime p ≥ 3 and infinitely many n, we exhibit an n-vertex d-regular graph with girth Ω(log_{d-1} n) and vertex expansion of sublinear-sized sets bounded by (d+1)/2 whose nontrivial eigenvalues are bounded in magnitude by 2√{d-1}+O(1/(log_{d-1} n)). 2) In any Ramanujan graph with girth C log_{d-1} n, all sets of size bounded by n^{0.99C/4} have near-lossless vertex expansion (1-o_d(1))d. The tools in analyzing our construction include the nonbacktracking operator of an infinite graph, the Ihara-Bass formula, a trace moment method inspired by Bordenave’s proof of Friedman’s theorem [Bordenave, 2019], and a method of Kahale [Kahale, 1995] to study dispersion of eigenvalues of perturbed graphs.
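
For reference, the standard notions in play (standard definitions, not specific to this paper): for a vertex set S in a d-regular graph G, the vertex expansion discussed here is |N(S)|/|S| under one common convention, and

\[
G \text{ is Ramanujan if every adjacency eigenvalue } \lambda \ne \pm d \text{ satisfies } |\lambda| \le 2\sqrt{d-1},
\]

with "near-Ramanujan" relaxing the bound to 2√{d-1} + o(1), matching the eigenvalue guarantee stated in item 1.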

Cite as

Theo McKenzie and Sidhanth Mohanty. High-Girth Near-Ramanujan Graphs with Lossy Vertex Expansion. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 96:1-96:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{mckenzie_et_al:LIPIcs.ICALP.2021.96,
  author =	{McKenzie, Theo and Mohanty, Sidhanth},
  title =	{{High-Girth Near-Ramanujan Graphs with Lossy Vertex Expansion}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{96:1--96:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.96},
  URN =		{urn:nbn:de:0030-drops-141655},
  doi =		{10.4230/LIPIcs.ICALP.2021.96},
  annote =	{Keywords: expander graphs, Ramanujan graphs, vertex expansion, girth}
}
Document
The SDP Value for Random Two-Eigenvalue CSPs

Authors: Sidhanth Mohanty, Ryan O'Donnell, and Pedro Paredes

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
We precisely determine the SDP value (equivalently, quantum value) of large random instances of certain kinds of constraint satisfaction problems, "two-eigenvalue 2CSPs". We show this SDP value coincides with the spectral relaxation value, possibly indicating a computational threshold. Our analysis extends the previously resolved cases of random regular 2XOR and NAE-3SAT, and includes new cases such as random Sort₄ (equivalently, CHSH) and Forrelation CSPs. Our techniques include new generalizations of the nonbacktracking operator, the Ihara-Bass Formula, and the Friedman/Bordenave proof of Alon’s Conjecture.
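
For orientation, the basic SDP relaxation in the simplest previously resolved case mentioned above, random regular 2XOR, where each constraint reads x_u x_v = b_{uv} ∈ {±1} on an edge (u,v) (a standard formulation; the paper's two-eigenvalue framework is more general):

\[
\mathrm{SDP} \;=\; \max_{\|y_u\|_2 = 1\ \forall u}\ \ \frac{1}{|E|} \sum_{(u,v) \in E} \frac{1 + b_{uv}\,\langle y_u, y_v \rangle}{2}.
\]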

Cite as

Sidhanth Mohanty, Ryan O'Donnell, and Pedro Paredes. The SDP Value for Random Two-Eigenvalue CSPs. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 50:1-50:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{mohanty_et_al:LIPIcs.STACS.2020.50,
  author =	{Mohanty, Sidhanth and O'Donnell, Ryan and Paredes, Pedro},
  title =	{{The SDP Value for Random Two-Eigenvalue CSPs}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{50:1--50:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.50},
  URN =		{urn:nbn:de:0030-drops-119110},
  doi =		{10.4230/LIPIcs.STACS.2020.50},
  annote =	{Keywords: Semidefinite programming, constraint satisfaction problems}
}
Document
High-Dimensional Expanders from Expanders

Authors: Siqi Liu, Sidhanth Mohanty, and Elizabeth Yang

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We present an elementary way to transform an expander graph into a simplicial complex where all high order random walks have a constant spectral gap, i.e., they converge rapidly to the stationary distribution. As an upshot, we obtain new constructions, as well as a natural probabilistic model to sample constant degree high-dimensional expanders. In particular, we show that given an expander graph G, adding self loops to G and taking the tensor product of the modified graph with a high-dimensional expander produces a new high-dimensional expander. Our proof of rapid mixing of high order random walks is based on the decomposable Markov chains framework introduced by [Jerrum et al., 2004].
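
The construction itself operates on simplicial complexes; purely to illustrate the two graph operations named above (adding self-loops, then taking a tensor product), here is a minimal numpy sketch on adjacency matrices, not the paper's construction:

import numpy as np

def add_self_loops(A):
    """Adjacency matrix of the graph with a self-loop added at every vertex."""
    return A + np.eye(A.shape[0], dtype=A.dtype)

def tensor_product(A, B):
    """Adjacency matrix of the tensor (categorical) product of two graphs,
    i.e. the Kronecker product of their adjacency matrices."""
    return np.kron(A, B)

# Example: a 4-cycle (with self-loops added) tensored with a triangle.
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
H = tensor_product(add_self_loops(C4), K3)
print(H.shape)  # (12, 12)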

Cite as

Siqi Liu, Sidhanth Mohanty, and Elizabeth Yang. High-Dimensional Expanders from Expanders. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 12:1-12:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{liu_et_al:LIPIcs.ITCS.2020.12,
  author =	{Liu, Siqi and Mohanty, Sidhanth and Yang, Elizabeth},
  title =	{{High-Dimensional Expanders from Expanders}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{12:1--12:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.12},
  URN =		{urn:nbn:de:0030-drops-116974},
  doi =		{10.4230/LIPIcs.ITCS.2020.12},
  annote =	{Keywords: High-Dimensional Expanders, Markov Chains}
}
Document
Pseudo-Deterministic Streaming

Authors: Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, 𝓁₂ approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm A, there exists a stream x so that running A twice on x (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for 𝓁₂ norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
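
In symbols, the pseudo-determinism requirement described in the first sentence (our phrasing; the success constant is conventional and can be amplified): there is a canonical output F such that

\[
\forall\ \text{streams } x:\qquad \Pr_{\text{coins of } \mathcal{A}}\bigl[\,\mathcal{A}(x) = F(x)\,\bigr] \;\ge\; 2/3 .
\]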

Cite as

Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff. Pseudo-Deterministic Streaming. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 79:1-79:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2020.79,
  author =	{Goldwasser, Shafi and Grossman, Ofer and Mohanty, Sidhanth and Woodruff, David P.},
  title =	{{Pseudo-Deterministic Streaming}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{79:1--79:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.79},
  URN =		{urn:nbn:de:0030-drops-117644},
  doi =		{10.4230/LIPIcs.ITCS.2020.79},
  annote =	{Keywords: streaming, pseudo-deterministic}
}
Document
On Sketching the q to p Norms

Authors: Aditya Krishnan, Sidhanth Mohanty, and David P. Woodruff

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We initiate the study of data dimensionality reduction, or sketching, for the q → p norms. Given an n × d matrix A, the q → p norm, denoted ‖A‖_{q → p} = sup_{x ∈ ℝ^d ∖ {0}} ‖Ax‖_p / ‖x‖_q, is a natural generalization of several matrix and vector norms studied in the data stream and sketching models, with applications to data mining, hardness of approximation, and oblivious routing. We say a distribution S on random matrices L : ℝ^{nd} → ℝ^k is a (k, α)-sketching family if from L(A), one can approximate ‖A‖_{q → p} up to a factor α with constant probability. We provide upper and lower bounds on the sketching dimension k for every p, q ∈ [1, ∞], and in a number of cases our bounds are tight. While we mostly focus on constant α, we also consider large approximation factors α, as well as other variants of the problem such as when A has low rank.
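
A crude Monte Carlo lower bound on the quantity defined above, useful only as a sanity check of the definition (this is not a sketching algorithm, and computing ‖A‖_{q → p} exactly is hard in general):

import numpy as np

def q_to_p_norm_lower_bound(A, q, p, trials=20000, seed=None):
    """Lower-bound ||A||_{q->p} = sup_x ||Ax||_p / ||x||_q by maximizing the
    ratio over random Gaussian directions x."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        best = max(best, np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=q))
    return best

A = np.random.default_rng(0).standard_normal((5, 4))
print(q_to_p_norm_lower_bound(A, q=2, p=2, seed=1))  # approaches the spectral norm from below
print(np.linalg.norm(A, 2))                          # exact 2 -> 2 norm, for comparison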

Cite as

Aditya Krishnan, Sidhanth Mohanty, and David P. Woodruff. On Sketching the q to p Norms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 15:1-15:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{krishnan_et_al:LIPIcs.APPROX-RANDOM.2018.15,
  author =	{Krishnan, Aditya and Mohanty, Sidhanth and Woodruff, David P.},
  title =	{{On Sketching the q to p Norms}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{15:1--15:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.15},
  URN =		{urn:nbn:de:0030-drops-94192},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.15},
  annote =	{Keywords: Dimensionality Reduction, Norms, Sketching, Streaming}
}
Document
Algorithms for Noisy Broadcast with Erasures

Authors: Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
The noisy broadcast model was first studied by [Gallager, 1988] where an n-character input is distributed among n processors, so that each processor receives one input bit. Computation proceeds in rounds, where in each round each processor broadcasts a single character, and each reception is corrupted independently at random with some probability p. [Gallager, 1988] gave an algorithm for all processors to learn the input in O(log log n) rounds with high probability. Later, a matching lower bound of Ω(log log n) was given by [Goyal et al., 2008]. We study a relaxed version of this model where each reception is erased and replaced with a '?' independently with probability p, so the processors have knowledge of whether a bit has been corrupted. In this relaxed model, we break past the lower bound of [Goyal et al., 2008] and obtain an O(log^* n)-round algorithm for all processors to learn the input with high probability. We also show an O(1)-round algorithm for the same problem when the alphabet size is Ω(poly(n)).
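
A minimal simulation of the communication model described above (one round of broadcasts with independent erasures); this only illustrates the model, it is not any of the paper's algorithms:

import random

def broadcast_round(chars, p, rng=None):
    """One round of the erasure broadcast model: every processor i broadcasts
    chars[i]; each reception is independently replaced by '?' with probability p.
    received[j][i] is what processor j hears from processor i."""
    rng = rng or random.Random()
    n = len(chars)
    return [[chars[i] if rng.random() >= p else '?' for i in range(n)]
            for j in range(n)]

rng = random.Random(0)
inp = [rng.randint(0, 1) for _ in range(8)]   # one input bit per processor
received = broadcast_round(inp, p=0.3, rng=rng)
print(inp)
print(received[0])  # processor 0's (partially erased) view of this round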

Cite as

Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty. Algorithms for Noisy Broadcast with Erasures. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 153:1-153:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grossman_et_al:LIPIcs.ICALP.2018.153,
  author =	{Grossman, Ofer and Haeupler, Bernhard and Mohanty, Sidhanth},
  title =	{{Algorithms for Noisy Broadcast with Erasures}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{153:1--153:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.153},
  URN =		{urn:nbn:de:0030-drops-91576},
  doi =		{10.4230/LIPIcs.ICALP.2018.153},
  annote =	{Keywords: noisy broadcast, error correction, erasures, distributed computing with noise}
}
  • Refine by Author
  • 7 Mohanty, Sidhanth
  • 2 Grossman, Ofer
  • 2 Ohsaka, Naoto
  • 2 Woodruff, David P.
  • 1 Chakrabarti, Amit
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Error-correcting codes
  • 2 Theory of computation → Expander graphs and randomness extractors
  • 2 Theory of computation → Problems, reductions and completeness
  • 2 Theory of computation → Pseudorandomness and derandomization
  • 1 Mathematics of computing → Spectra of graphs
  • Show More...

  • Refine by Keyword
  • 2 constraint satisfaction problems
  • 2 hardness of approximation
  • 2 reconfiguration problems
  • 1 Data streaming
  • 1 Dimensionality Reduction
  • Show More...

  • Refine by Type
  • 12 document

  • Refine by Publication Year
  • 4 2024
  • 3 2020
  • 2 2018
  • 2 2022
  • 1 2021