26 Search Results for "Dinur, Irit"


Document
RANDOM
Coboundary and Cosystolic Expansion Without Dependence on Dimension or Degree

Authors: Yotam Dikstein and Irit Dinur

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We give new bounds on the cosystolic expansion constants of several families of high dimensional expanders, and the known coboundary expansion constants of order complexes of homogeneous geometric lattices, including the spherical building of SL_n(𝔽_q). The improvement applies to the high dimensional expanders constructed by Lubotzky, Samuels and Vishne, and by Kaufman and Oppenheim. Our new expansion constants do not depend on the degree of the complex nor on its dimension, nor on the group of coefficients. This implies improved bounds on Gromov’s topological overlap constant, and on Dinur and Meshulam’s cover stability, which may have applications for agreement testing. In comparison, existing bounds decay exponentially with the ambient dimension (for spherical buildings) and in addition decay linearly with the degree (for all known bounded-degree high dimensional expanders).

Our results are based on several new techniques:
- We develop a new "color-restriction" technique which enables proving dimension-free expansion by restricting a multi-partite complex to small random subsets of its color classes.
- We give a new "spectral" proof for Evra and Kaufman’s local-to-global theorem, deriving better bounds and getting rid of the dependence on the degree. This theorem bounds the cosystolic expansion of a complex using coboundary expansion and spectral expansion of the links.
- We derive absolute bounds on the coboundary expansion of the spherical building (and any order complex of a homogeneous geometric lattice) by constructing a novel family of very short cones.
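
To make the notion of coboundary expansion concrete in its simplest (0-dimensional) case, the following Python sketch brute-forces the expansion constant of a small graph. The graph, and the normalization used, are illustrative choices of mine and not the ones analyzed in the paper, which concerns higher-dimensional cochains.

    from itertools import product

    # Brute-force the 0-dimensional coboundary expansion of a small graph.
    # 0-cochains are functions f: V -> F_2; the coboundary (delta f)(uv) = f(u) + f(v).
    # One common normalization (assumed here for illustration):
    #   expansion = min over non-constant f of  wt(delta f) / dist(f, constants),
    # where wt(delta f) is the fraction of edges on which f disagrees and
    # dist(f, constants) is the fraction of vertices to flip to make f constant.

    def coboundary_expansion_dim0(vertices, edges):
        best = None
        for bits in product([0, 1], repeat=len(vertices)):
            f = dict(zip(vertices, bits))
            ones = sum(bits)
            dist_to_const = min(ones, len(vertices) - ones) / len(vertices)
            if dist_to_const == 0:      # f is constant, i.e. already a coboundary
                continue
            wt = sum(1 for (u, v) in edges if f[u] != f[v]) / len(edges)
            ratio = wt / dist_to_const
            best = ratio if best is None else min(best, ratio)
        return best

    # Example: the 4-cycle.
    V = [0, 1, 2, 3]
    E = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(coboundary_expansion_dim0(V, E))   # 1.0 for the 4-cycle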

Cite as

Yotam Dikstein and Irit Dinur. Coboundary and Cosystolic Expansion Without Dependence on Dimension or Degree. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 62:1-62:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dikstein_et_al:LIPIcs.APPROX/RANDOM.2024.62,
  author =	{Dikstein, Yotam and Dinur, Irit},
  title =	{{Coboundary and Cosystolic Expansion Without Dependence on Dimension or Degree}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{62:1--62:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.62},
  URN =		{urn:nbn:de:0030-drops-210556},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.62},
  annote =	{Keywords: High Dimensional Expanders, HDX, Spectral Expansion, Coboundary Expansion, Cocycle Expansion, Cosystolic Expansion}
}
Document
Track A: Algorithms, Complexity and Games
Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error

Authors: Guy Goldberg

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Relaxed locally decodable codes (RLDCs) are error-correcting codes in which individual bits of the message can be recovered by querying only a few bits from a noisy codeword. For uncorrupted codewords, and for every bit, the decoder must decode the bit correctly with high probability. However, for a noisy codeword, a relaxed local decoder is allowed to output a "rejection" symbol, indicating that the decoding failed.

We study the power of adaptivity and two-sided error for RLDCs. Our main result is that if the underlying code is linear, adaptivity and two-sided error do not give any power to relaxed local decoding. We construct a reduction from adaptive, two-sided error relaxed local decoders to non-adaptive, one-sided error ones. That is, the reduction produces a relaxed local decoder that never errs or rejects if its input is a valid codeword, and that makes queries based on its internal randomness (and the requested index to decode), independently of the input. The reduction essentially maintains the query complexity, requiring at most one additional query. For any input, the decoder’s error probability increases at most two-fold. Furthermore, assuming the underlying code is in systematic form, where the original message is embedded as the first bits of its encoding, the reduction also conserves both the code itself and its rate and distance properties.

We base the reduction on our new notion of additive promise problems. A promise problem is additive if the sum of any two YES-instances is a YES-instance and the sum of any NO-instance and a YES-instance is a NO-instance. This novel framework captures both linear RLDCs and property testing (of linear properties), despite their significant differences. We prove that, in general, algorithms for any additive promise problem do not gain power from adaptivity or two-sided error, and obtain the result for RLDCs as a special case. The result also holds for relaxed locally correctable codes (RLCCs), where a codeword bit (rather than a message bit) should be recovered.

As an application, we improve the best known lower bound for linear adaptive RLDCs. Specifically, we prove that such codes require block length of n ≥ k^{1+Ω(1/q²)}, where k denotes the message length and q denotes the number of queries.
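
The additivity conditions are easy to see concretely for a linear code, where YES-instances are codewords and NO-instances are words far from the code. The sketch below uses a toy [5,2] binary code of my own choosing (not from the paper) to check both conditions exhaustively.

    import itertools

    # Illustration of an additive promise problem arising from a linear code over F_2:
    #   YES = codewords, NO = words at Hamming distance >= d_far from the code.
    # Additivity: YES + YES is YES (linearity); NO + YES is NO (adding a codeword
    # permutes the code, so the distance to the code is preserved).

    G = [(1, 0, 0, 1, 1),      # generator matrix of a small [5,2] binary code
         (0, 1, 1, 0, 1)]

    def encode(msg):
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

    code = {encode(m) for m in itertools.product([0, 1], repeat=len(G))}

    def dist_to_code(w):
        return min(sum(a != b for a, b in zip(w, c)) for c in code)

    def add(u, v):
        return tuple((a + b) % 2 for a, b in zip(u, v))

    d_far = 2
    yes = list(code)
    no = [w for w in itertools.product([0, 1], repeat=5) if dist_to_code(w) >= d_far]

    assert all(add(c1, c2) in code for c1 in yes for c2 in yes)            # YES + YES is YES
    assert all(dist_to_code(add(w, c)) >= d_far for w in no for c in yes)  # NO + YES is NO
    print("additivity verified on a toy [5,2] code")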

Cite as

Guy Goldberg. Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 74:1-74:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{goldberg:LIPIcs.ICALP.2024.74,
  author =	{Goldberg, Guy},
  title =	{{Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{74:1--74:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.74},
  URN =		{urn:nbn:de:0030-drops-202174},
  doi =		{10.4230/LIPIcs.ICALP.2024.74},
  annote =	{Keywords: Locally decodable codes, Relaxed locally correctable codes, Relaxed locally decodable codes}
}
Document
RANDOM
Fine Grained Analysis of High Dimensional Random Walks

Authors: Roy Gotlib and Tali Kaufman

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
One of the most important properties of high dimensional expanders is that high dimensional random walks converge rapidly. This property has proven extremely useful in a variety of areas of theoretical computer science, from agreement testing to sampling, coding theory and more. In this paper we present a state-of-the-art result in a line of works analyzing the convergence of high dimensional random walks [Tali Kaufman and David Mass, 2017; Irit Dinur and Tali Kaufman, 2017; Tali Kaufman and Izhar Oppenheim, 2018; Vedat Levi Alev and Lap Chi Lau, 2020], by presenting a structured version of the result of [Vedat Levi Alev and Lap Chi Lau, 2020]. While previous works examined expansion from the viewpoint of the worst possible eigenvalue, in this work we relate the expansion of a function to the entire spectrum of the random walk operator using the structure of the function; we call such a theorem a Fine Grained High Order Random Walk Theorem. In sufficiently structured cases the fine grained result that we present here can be much better than the worst case, while in the worst case our result is equivalent to [Vedat Levi Alev and Lap Chi Lau, 2020]. In order to prove the Fine Grained High Order Random Walk Theorem we introduce a way to bootstrap the expansion of random walks on the vertices of a complex into a fine grained understanding of higher order random walks, provided that the expansion is good enough. In addition, our single bootstrapping theorem can simultaneously yield our Fine Grained High Order Random Walk Theorem as well as the well known Trickling Down Theorem. Prior to this work, high order random walk theorems and the Trickling Down Theorem were obtained by different proof methods.
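
As a small numerical illustration of what a "high dimensional random walk" is, the sketch below builds the lazy up-down walk on the edges of a tiny 2-dimensional complex and prints its spectrum. The complex (the complete 2-skeleton on 5 vertices) and the lazy variant of the walk are illustrative choices, not the objects analyzed in the paper.

    import itertools
    import numpy as np

    # Lazy up-down walk on the 1-faces (edges) of a pure 2-dimensional complex:
    # from an edge, pick a triangle containing it uniformly, then pick an edge of
    # that triangle uniformly. Here: the complete 2-skeleton on 5 vertices.

    n = 5
    triangles = [frozenset(t) for t in itertools.combinations(range(n), 3)]
    edges = [frozenset(e) for e in itertools.combinations(range(n), 2)]
    idx = {e: i for i, e in enumerate(edges)}

    P = np.zeros((len(edges), len(edges)))
    for e in edges:
        containing = [t for t in triangles if e <= t]
        for t in containing:
            for e2 in itertools.combinations(t, 2):
                P[idx[e], idx[frozenset(e2)]] += 1.0 / (len(containing) * 3)

    eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
    print(np.round(eigs, 3))   # top eigenvalue 1; the rest reflect the walk's expansion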

Cite as

Roy Gotlib and Tali Kaufman. Fine Grained Analysis of High Dimensional Random Walks. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 49:1-49:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gotlib_et_al:LIPIcs.APPROX/RANDOM.2023.49,
  author =	{Gotlib, Roy and Kaufman, Tali},
  title =	{{Fine Grained Analysis of High Dimensional Random Walks}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{49:1--49:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.49},
  URN =		{urn:nbn:de:0030-drops-188740},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.49},
  annote =	{Keywords: High Dimensional Expanders, High Dimensional Random Walks, Local Spectral Expansion, Local to Global, Trickling Down}
}
Document
RANDOM
NP-Hardness of Almost Coloring Almost 3-Colorable Graphs

Authors: Yahli Hecht, Dor Minzer, and Muli Safra

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
A graph G = (V,E) is said to be (k,δ) almost colorable if there is a subset of vertices V' ⊆ V of size at least (1-δ)|V| such that the induced subgraph of G on V' is k-colorable. We prove that for every k there exists δ > 0 such that for all ε > 0, given a graph G it is NP-hard (under randomized reductions) to distinguish between:
1) Yes case: G is (3,ε) almost colorable.
2) No case: G is not (k,δ) almost colorable.
This improves upon an earlier result of Khot et al. [Irit Dinur et al., 2018], who showed a weaker result in which, in the "yes case", the graph is (4,ε) almost colorable.
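
The definition of (k,δ) almost colorability is easy to check by brute force on toy instances. The following Python sketch (exponential time, purely illustrative; the example graph K_4 is my choice) tests whether some subset of at least (1-δ)|V| vertices induces a k-colorable subgraph.

    import itertools

    # Brute-force check of (k, delta) almost-colorability: is there a subset V'
    # of at least (1 - delta)|V| vertices whose induced subgraph is k-colorable?

    def is_k_colorable(vertices, edges, k):
        for coloring in itertools.product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, coloring))
            if all(col[u] != col[v] for (u, v) in edges):
                return True
        return False

    def almost_colorable(vertices, edges, k, delta):
        keep_at_least = len(vertices) - int(delta * len(vertices))
        for size in range(len(vertices), keep_at_least - 1, -1):
            for vs in itertools.combinations(vertices, size):
                sub_edges = [(u, v) for (u, v) in edges if u in vs and v in vs]
                if is_k_colorable(vs, sub_edges, k):
                    return True
        return False

    K4_V = [0, 1, 2, 3]
    K4_E = [(u, v) for u in K4_V for v in K4_V if u < v]
    print(almost_colorable(K4_V, K4_E, k=3, delta=0.0))    # False: K_4 needs 4 colors
    print(almost_colorable(K4_V, K4_E, k=3, delta=0.25))   # True: drop one vertex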

Cite as

Yahli Hecht, Dor Minzer, and Muli Safra. NP-Hardness of Almost Coloring Almost 3-Colorable Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 51:1-51:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hecht_et_al:LIPIcs.APPROX/RANDOM.2023.51,
  author =	{Hecht, Yahli and Minzer, Dor and Safra, Muli},
  title =	{{NP-Hardness of Almost Coloring Almost 3-Colorable Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{51:1--51:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.51},
  URN =		{urn:nbn:de:0030-drops-188761},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.51},
  annote =	{Keywords: PCP, Hardness of approximation}
}
Document
A New Conjecture on Hardness of 2-CSP’s with Implications to Hardness of Densest k-Subgraph and Other Problems

Authors: Julia Chuzhoy, Mina Dalirrooyfard, Vadim Grinberg, and Zihan Tan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We propose a new conjecture on hardness of 2-CSP’s, and show that new hardness of approximation results for Densest k-Subgraph and several other problems, including a graph partitioning problem, and a variation of the Graph Crossing Number problem, follow from this conjecture. The conjecture can be viewed as occupying a middle ground between the d-to-1 conjecture, and hardness results for 2-CSP’s that can be obtained via standard techniques, such as Parallel Repetition combined with standard 2-prover protocols for the 3SAT problem. We hope that this work will motivate further exploration of hardness of 2-CSP’s in the regimes arising from the conjecture. We believe that a positive resolution of the conjecture will provide a good starting point for other hardness of approximation proofs. Another contribution of our work is proving that the problems that we consider are roughly equivalent from the approximation perspective. Some of these problems arose in previous work, from which it appeared that they may be related to each other. We formalize this relationship in this work.

Cite as

Julia Chuzhoy, Mina Dalirrooyfard, Vadim Grinberg, and Zihan Tan. A New Conjecture on Hardness of 2-CSP’s with Implications to Hardness of Densest k-Subgraph and Other Problems. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 38:1-38:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chuzhoy_et_al:LIPIcs.ITCS.2023.38,
  author =	{Chuzhoy, Julia and Dalirrooyfard, Mina and Grinberg, Vadim and Tan, Zihan},
  title =	{{A New Conjecture on Hardness of 2-CSP’s with Implications to Hardness of Densest k-Subgraph and Other Problems}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{38:1--38:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.38},
  URN =		{urn:nbn:de:0030-drops-175411},
  doi =		{10.4230/LIPIcs.ITCS.2023.38},
  annote =	{Keywords: Hardness of Approximation, Densest k-Subgraph}
}
Document
Invited Talk
Expanders in Higher Dimensions (Invited Talk)

Authors: Irit Dinur

Published in: LIPIcs, Volume 250, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)


Abstract
Expander graphs have been studied in many areas of mathematics and in computer science with versatile applications, including coding theory, networking, computational complexity and geometry. High-dimensional expanders are a generalization that has been studied in recent years and their promise is beginning to bear fruit. In the talk, I will survey some powerful local to global properties of high-dimensional expanders, and describe several interesting applications, ranging from convergence of random walks to construction of locally testable codes that prove the c³ conjecture (namely, codes with constant rate, constant distance, and constant locality).

Cite as

Irit Dinur. Expanders in Higher Dimensions (Invited Talk). In 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{dinur:LIPIcs.FSTTCS.2022.4,
  author =	{Dinur, Irit},
  title =	{{Expanders in Higher Dimensions}},
  booktitle =	{42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-261-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{250},
  editor =	{Dawar, Anuj and Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2022.4},
  URN =		{urn:nbn:de:0030-drops-173967},
  doi =		{10.4230/LIPIcs.FSTTCS.2022.4},
  annote =	{Keywords: Expanders}
}
Document
RANDOM
Improved Bounds for Coloring Locally Sparse Hypergraphs

Authors: Fotis Iliopoulos

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We show that, for every k ≥ 2, every k-uniform hypergraph of degree Δ and girth at least 5 is efficiently (1+o(1))(k-1) (Δ / ln Δ)^{1/(k-1)}-list colorable. As an application we obtain the currently best deterministic algorithm for list-coloring random hypergraphs of bounded average degree.
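
The leading term of the stated list-size bound is easy to evaluate numerically. The short sketch below (ignoring the (1+o(1)) factor, with sample values of k and Δ chosen by me) shows how quickly the bound shrinks as the uniformity k grows.

    import math

    # Evaluate the leading term (k - 1) * (Delta / ln Delta)^(1 / (k - 1)) of the
    # list-chromatic bound from the abstract, ignoring the (1 + o(1)) factor.
    def list_size_bound(k, delta):
        return (k - 1) * (delta / math.log(delta)) ** (1 / (k - 1))

    for k in (2, 3, 4):
        print(k, round(list_size_bound(k, delta=1000), 2))
    # For k = 2 (graphs) this is ~ Delta / ln Delta; for larger k it shrinks rapidly.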

Cite as

Fotis Iliopoulos. Improved Bounds for Coloring Locally Sparse Hypergraphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 39:1-39:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{iliopoulos:LIPIcs.APPROX/RANDOM.2021.39,
  author =	{Iliopoulos, Fotis},
  title =	{{Improved Bounds for Coloring Locally Sparse Hypergraphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{39:1--39:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.39},
  URN =		{urn:nbn:de:0030-drops-147328},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.39},
  annote =	{Keywords: hypergaph coloring, semi-random method, locally sparse, random hypergraphs}
}
Document
On Rich 2-to-1 Games

Authors: Mark Braverman, Subhash Khot, and Dor Minzer

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We propose a variant of the 2-to-1 Games Conjecture that we call the Rich 2-to-1 Games Conjecture and show that it is equivalent to the Unique Games Conjecture. We are motivated by two considerations. Firstly, in light of the recent proof of the 2-to-1 Games Conjecture [Subhash Khot et al., 2017; Irit Dinur et al., 2018; Irit Dinur et al., 2018; Subhash Khot et al., 2018], we hope to understand how one might make further progress towards a proof of the Unique Games Conjecture. Secondly, the new variant, with perfect completeness in addition, might imply hardness of approximation results that necessarily require perfect completeness and (hence) are not implied by the Unique Games Conjecture.

Cite as

Mark Braverman, Subhash Khot, and Dor Minzer. On Rich 2-to-1 Games. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 27:1-27:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.ITCS.2021.27,
  author =	{Braverman, Mark and Khot, Subhash and Minzer, Dor},
  title =	{{On Rich 2-to-1 Games}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{27:1--27:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.27},
  URN =		{urn:nbn:de:0030-drops-135666},
  doi =		{10.4230/LIPIcs.ITCS.2021.27},
  annote =	{Keywords: PCP, Unique-Games, Perfect Completeness}
}
Document
Explicit SoS Lower Bounds from High-Dimensional Expanders

Authors: Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We construct an explicit and structured family of 3XOR instances which is hard for O(√(log n)) levels of the Sum-of-Squares hierarchy. In contrast to earlier constructions, which involve a random component, our systems are highly structured and can be constructed explicitly in deterministic polynomial time. Our construction is based on the high-dimensional expanders devised by Lubotzky, Samuels and Vishne, known as LSV complexes or Ramanujan complexes, and our analysis is based on two notions of expansion for these complexes: cosystolic expansion, and a local isoperimetric inequality due to Gromov. Our construction offers an interesting contrast to the recent work of Alev, Jeronimo and the last author (FOCS 2019). They showed that 3XOR instances in which the variables correspond to vertices in a high-dimensional expander are easy to solve. In contrast, in our instances the variables correspond to the edges of the complex.

Cite as

Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani. Explicit SoS Lower Bounds from High-Dimensional Expanders. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2021.38,
  author =	{Dinur, Irit and Filmus, Yuval and Harsha, Prahladh and Tulsiani, Madhur},
  title =	{{Explicit SoS Lower Bounds from High-Dimensional Expanders}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.38},
  URN =		{urn:nbn:de:0030-drops-135774},
  doi =		{10.4230/LIPIcs.ITCS.2021.38},
  annote =	{Keywords: High-dimensional expanders, sum-of-squares, integrality gaps}
}
Document
APPROX
Revisiting Alphabet Reduction in Dinur’s PCP

Authors: Venkatesan Guruswami, Jakub Opršal, and Sai Sandeep

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
Dinur’s celebrated proof of the PCP theorem alternates two main steps in several iterations: gap amplification to increase the soundness gap by a large constant factor (at the expense of much larger alphabet size), and a composition step that brings back the alphabet size to an absolute constant (at the expense of a fixed constant factor loss in the soundness gap). We note that the gap amplification can produce a Label Cover CSP. This allows us to reduce the alphabet size via a direct long-code based reduction from Label Cover to a Boolean CSP. Our composition step thus bypasses the concept of Assignment Testers from Dinur’s proof, and we believe it is more intuitive - it is just a gadget reduction. The analysis also uses only elementary facts (Parseval’s identity) about Fourier Transforms over the hypercube.

Cite as

Venkatesan Guruswami, Jakub Opršal, and Sai Sandeep. Revisiting Alphabet Reduction in Dinur’s PCP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2020.34,
  author =	{Guruswami, Venkatesan and Opr\v{s}al, Jakub and Sandeep, Sai},
  title =	{{Revisiting Alphabet Reduction in Dinur’s PCP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.34},
  URN =		{urn:nbn:de:0030-drops-126372},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.34},
  annote =	{Keywords: PCP theorem, CSP, discrete Fourier analysis, label cover, long code}
}
Document
Hardness Amplification of Optimization Problems

Authors: Elazar Goldenberg and Karthik C. S.

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all k smaller instances.

Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n⋅α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.

As a consequence of the above theorem, we show hardness amplification for problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
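
To illustrate the "direct product feasible" condition, here is a toy aggregation for Max-SAT (my own example, not the paper's): place k formulas on disjoint variable sets and take the union of their clauses. The objective then decomposes, so an optimal assignment to the aggregate restricts to an optimal assignment of every piece.

    import itertools

    # Toy illustration of direct product feasibility for Max-SAT.
    # A clause is a list of (variable, sign) literals; a literal is satisfied
    # when assignment[variable] == sign.

    def aggregate(instances):
        clauses, offset = [], 0
        for n_vars, cls in instances:
            clauses += [[(v + offset, s) for (v, s) in c] for c in cls]
            offset += n_vars
        return offset, clauses

    def value(assignment, clauses):
        return sum(any(assignment[v] == s for (v, s) in c) for c in clauses)

    def optimum(n_vars, clauses):
        return max((value(a, clauses), a)
                   for a in itertools.product([0, 1], repeat=n_vars))

    I1 = (2, [[(0, 1)], [(0, 0), (1, 1)], [(1, 0)]])   # two tiny instances
    I2 = (1, [[(0, 1)], [(0, 0)]])
    big = aggregate([I1, I2])

    best_val, best_asg = optimum(*big)
    print("aggregate optimum:", best_val)
    # The restriction of an optimal aggregate assignment is optimal for each piece.
    assert value(best_asg[:2], I1[1]) == optimum(*I1)[0]
    assert value(best_asg[2:], I2[1]) == optimum(*I2)[0]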

Cite as

Elazar Goldenberg and Karthik C. S.. Hardness Amplification of Optimization Problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{goldenberg_et_al:LIPIcs.ITCS.2020.1,
  author =	{Goldenberg, Elazar and Karthik C. S.},
  title =	{{Hardness Amplification of Optimization Problems}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.1},
  URN =		{urn:nbn:de:0030-drops-116863},
  doi =		{10.4230/LIPIcs.ITCS.2020.1},
  annote =	{Keywords: hardness amplification, average case complexity, direct product, optimization problems, fine-grained complexity, TFNP}
}
Document
Smooth and Strong PCPs

Authors: Orr Paradise

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
- A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
- A PCP is smooth if each location in a proof is queried with equal probability.
We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth.

Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
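
Smoothness can be checked mechanically for any concrete verifier. The sketch below uses the 3-query BLR linearity test on a table indexed by 𝔽_2^n as an illustrative verifier of my choosing (it is unrelated to the paper's construction) and confirms that every proof location is queried with the same probability.

    import itertools
    from collections import Counter

    # Check smoothness (every proof location queried equally often) of a toy
    # verifier: the 3-query BLR test, which on randomness (x, y) queries the
    # table at positions x, y and x + y.

    n = 3
    points = list(itertools.product([0, 1], repeat=n))

    def xor(x, y):
        return tuple(a ^ b for a, b in zip(x, y))

    counts = Counter()
    for x, y in itertools.product(points, repeat=2):   # enumerate all randomness
        for q in (x, y, xor(x, y)):
            counts[q] += 1

    print(set(counts.values()))   # a single value: each location queried equally often
    # Each of the 2^n locations is hit 3 * 2^n times out of 3 * 4^n total queries.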

Cite as

Orr Paradise. Smooth and Strong PCPs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 2:1-2:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{paradise:LIPIcs.ITCS.2020.2,
  author =	{Paradise, Orr},
  title =	{{Smooth and Strong PCPs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{2:1--2:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.2},
  URN =		{urn:nbn:de:0030-drops-116875},
  doi =		{10.4230/LIPIcs.ITCS.2020.2},
  annote =	{Keywords: Interactive and probabilistic proof systems, Probabilistically checkable proofs, Hardness of approximation}
}
Document
RANDOM
Direct Sum Testing: The General Case

Authors: Irit Dinur and Konstantin Golubev

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
A function f : [n_1] × ⋯ × [n_d] → 𝔽_2 is a direct sum if it is of the form f(a_1,…,a_d) = f_1(a_1) ⊕ ⋯ ⊕ f_d(a_d), for some d functions f_i : [n_i] → 𝔽_2, i = 1,…,d, where n_1,…,n_d ∈ ℕ. We present a 4-query test which distinguishes between direct sums and functions that are far from them. The test relies on the BLR linearity test (Blum, Luby, Rubinfeld, 1993) and on the direct product test constructed by Dinur & Steurer (2014). We also present a different test, which queries the function (d+1) times but is easier to analyze.

In multiplicative ±1 notation, this reads as follows. A d-dimensional tensor with ±1 entries is called a tensor product if it is a tensor product of d vectors with ±1 entries, or equivalently, if it is of rank 1. The presented tests can be read as tests for distinguishing between tensor products and tensors that are far from being tensor products.
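
Completeness of a 4-query direct-sum test of this kind is easy to verify directly: mix two random points coordinate-wise along a random subset S and its complement, and for a genuine direct sum the four values XOR to zero, since every coordinate's contribution appears an even number of times. The sketch below is a plausible reading of such a test, not a verbatim transcription of the paper's.

    import random

    # A natural 4-query direct-sum test: sample x, y and a random S, form the two
    # "mixed" points, and accept iff the four values of f XOR to 0.

    def direct_sum_test(f, ns, trials=1000):
        d = len(ns)
        for _ in range(trials):
            x = tuple(random.randrange(n) for n in ns)
            y = tuple(random.randrange(n) for n in ns)
            S = [random.random() < 0.5 for _ in range(d)]
            z = tuple(x[i] if S[i] else y[i] for i in range(d))
            w = tuple(y[i] if S[i] else x[i] for i in range(d))
            if f(x) ^ f(y) ^ f(z) ^ f(w) != 0:
                return False
        return True

    # A genuine direct sum over [3] x [4] x [5] always passes.
    fs = [{a: random.randrange(2) for a in range(n)} for n in (3, 4, 5)]
    direct_sum = lambda a: fs[0][a[0]] ^ fs[1][a[1]] ^ fs[2][a[2]]
    print(direct_sum_test(direct_sum, (3, 4, 5)))   # True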

Cite as

Irit Dinur and Konstantin Golubev. Direct Sum Testing: The General Case. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 40:1-40:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.APPROX-RANDOM.2019.40,
  author =	{Dinur, Irit and Golubev, Konstantin},
  title =	{{Direct Sum Testing: The General Case}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{40:1--40:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.40},
  URN =		{urn:nbn:de:0030-drops-112554},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.40},
  annote =	{Keywords: property testing, direct sum, tensor product}
}
Document
UG-Hardness to NP-Hardness by Losing Half

Authors: Amey Bhangale and Subhash Khot

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The 2-to-2 Games Theorem of [Subhash Khot et al., 2017; Dinur et al., 2018; Dinur et al., 2018; Dinur et al., 2018] implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least a (1/2-ε) fraction of the constraints and instances in which no assignment satisfies more than an ε fraction of the constraints, for every constant ε > 0. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: for at least a (1/2-ε) fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert the known UG-hardness results to NP-hardness. We show:
1) Tight inapproximability of independent sets in degree-d graphs within a factor of Ω(d/log² d), where d is a constant.
2) NP-hardness of approximating the Maximum Acyclic Subgraph problem within a factor of 2/3+ε, improving the previous ratio of 14/15+ε by Austrin et al. [Austrin et al., 2015].
3) For any predicate P^{-1}(1) ⊆ [q]^k supporting a balanced pairwise independent distribution, given a P-CSP instance with value at least 1/2-ε, it is NP-hard to satisfy more than a |P^{-1}(1)|/q^k + ε fraction of the constraints.
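
The condition in item 3), that P^{-1}(1) supports a balanced pairwise independent distribution, can be checked mechanically. The sketch below verifies it for 3LIN over {0,1}, a classic illustrative predicate of my choosing: the uniform distribution on {(a, b, a⊕b)} has uniform single-coordinate and pairwise marginals, and every sample satisfies x1 ⊕ x2 ⊕ x3 = 0.

    import itertools
    from collections import Counter
    from fractions import Fraction

    # Check that a distribution (given by its list of equally likely samples) over
    # [q]^k is balanced pairwise independent: every coordinate is uniform over [q]
    # and every pair of coordinates is uniform over [q] x [q].

    def is_balanced_pairwise_independent(samples, q):
        k, total = len(samples[0]), len(samples)
        for i in range(k):
            marg = Counter(s[i] for s in samples)
            if any(Fraction(marg[a], total) != Fraction(1, q) for a in range(q)):
                return False
        for i, j in itertools.combinations(range(k), 2):
            marg = Counter((s[i], s[j]) for s in samples)
            if any(Fraction(marg[p], total) != Fraction(1, q * q)
                   for p in itertools.product(range(q), repeat=2)):
                return False
        return True

    # Uniform distribution on the support of 3LIN: x1 ^ x2 ^ x3 = 0 over {0,1}.
    support = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
    print(is_balanced_pairwise_independent(support, q=2))   # True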

Cite as

Amey Bhangale and Subhash Khot. UG-Hardness to NP-Hardness by Losing Half. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.CCC.2019.3,
  author =	{Bhangale, Amey and Khot, Subhash},
  title =	{{UG-Hardness to NP-Hardness by Losing Half}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.3},
  URN =		{urn:nbn:de:0030-drops-108258},
  doi =		{10.4230/LIPIcs.CCC.2019.3},
  annote =	{Keywords: NP-hardness, Inapproximability, Unique Games Conjecture}
}
Document
From Local to Robust Testing via Agreement Testing

Authors: Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
A local tester for an error-correcting code is a probabilistic procedure that queries a small subset of coordinates, accepts codewords with probability one, and rejects non-codewords with probability proportional to their distance from the code. The local tester is robust if, for non-codewords, it satisfies the stronger property that the average distance of local views from accepting views is proportional to the distance from the code. Robust testing is an important component in constructions of locally testable codes and probabilistically checkable proofs, as it allows for composition of local tests. In this work we show that for certain codes, any (natural) local tester can be converted into a robust tester with roughly the same number of queries. Our result holds for the class of affine-invariant lifted codes, a broad class of codes that includes Reed-Muller codes, as well as recent constructions of high-rate locally testable codes (Guo, Kopparty, and Sudan, ITCS 2013). Instantiating this with known local testing results for lifted codes gives a more direct proof that improves some of the parameters of the main result of Guo, Haramaty, and Sudan (FOCS 2015), showing robustness of lifted codes.

To obtain the above transformation we relate the notions of local testing and robust testing to the notion of agreement testing, which attempts to find out whether valid partial assignments can be stitched together into a global codeword. We first show that agreement testing implies robust testing, and then show that local testing implies agreement testing. Our proof is combinatorial, and is based on expansion/sampling properties of the collection of local views of local testers. Thus, it immediately applies to local testers of lifted codes that query random affine subspaces in 𝔽_q^m, and moreover seems amenable to extension to other families of locally testable codes with expanding families of local views.
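
Agreement testing in its simplest form can be phrased as: given one local function per subset, pick two subsets and check that they agree on their intersection; if all local functions are restrictions of a single global function, the test always accepts. The sketch below is a generic illustration of this completeness, with a ground set and subset size of my choosing, not the specific agreement tests used in the paper.

    import itertools
    import random

    # Generic two-query agreement test: local views are functions on subsets of a
    # ground set; pick two subsets at random and check that their views agree on
    # the intersection. Views restricted from one global function always agree.

    def agreement_test(views, trials=200):
        subsets = list(views)
        for _ in range(trials):
            S, T = random.sample(subsets, 2)
            common = set(S) & set(T)
            if any(views[S][x] != views[T][x] for x in common):
                return False
        return True

    ground = range(8)
    global_f = {x: random.randrange(2) for x in ground}
    # Local views: the restriction of global_f to every 3-element subset.
    views = {S: {x: global_f[x] for x in S}
             for S in itertools.combinations(ground, 3)}
    print(agreement_test(views))   # True: consistent views always pass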

Cite as

Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi. From Local to Robust Testing via Agreement Testing. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2019.29,
  author =	{Dinur, Irit and Harsha, Prahladh and Kaufman, Tali and Ron-Zewi, Noga},
  title =	{{From Local to Robust Testing via Agreement Testing}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.29},
  URN =		{urn:nbn:de:0030-drops-101221},
  doi =		{10.4230/LIPIcs.ITCS.2019.29},
  annote =	{Keywords: Local testing, Robust testing, Agreement testing, Affine-invariant codes, Lifted codes}
}
