
Complete Volume

**Published in:** LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

LIPIcs, Volume 287, ITCS 2024, Complete Volume

15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 1-2170, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@Proceedings{guruswami:LIPIcs.ITCS.2024, title = {{LIPIcs, Volume 287, ITCS 2024, Complete Volume}}, booktitle = {15th Innovations in Theoretical Computer Science Conference (ITCS 2024)}, pages = {1--2170}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-309-6}, ISSN = {1868-8969}, year = {2024}, volume = {287}, editor = {Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024}, URN = {urn:nbn:de:0030-drops-195274}, doi = {10.4230/LIPIcs.ITCS.2024}, annote = {Keywords: LIPIcs, Volume 287, ITCS 2024, Complete Volume} }


Front Matter

**Published in:** LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

Front Matter, Table of Contents, Preface, Conference Organization

15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 0:i-0:xxiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{guruswami:LIPIcs.ITCS.2024.0, author = {Guruswami, Venkatesan}, title = {{Front Matter, Table of Contents, Preface, Conference Organization}}, booktitle = {15th Innovations in Theoretical Computer Science Conference (ITCS 2024)}, pages = {0:i--0:xxiv}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-309-6}, ISSN = {1868-8969}, year = {2024}, volume = {287}, editor = {Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.0}, URN = {urn:nbn:de:0030-drops-195283}, doi = {10.4230/LIPIcs.ITCS.2024.0}, annote = {Keywords: Front Matter, Table of Contents, Preface, Conference Organization} }


**Published in:** LIPIcs, Volume 284, 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)

In recent years, the framework of learning from label proportions (LLP) has been gaining importance in machine learning. In this setting, the training examples are aggregated into subsets or bags, and only the average label per bag is available for learning an example-level predictor. This generalizes traditional PAC learning, which is the special case of unit-sized bags. The computational learning aspects of LLP were studied in recent works [R. Saket, 2021; R. Saket, 2022], which showed algorithms and hardness for learning halfspaces in the LLP setting. In this work we focus on the intractability of LLP learning of Boolean functions. Our first result shows that given a collection of bags of size at most 2 which are consistent with an OR function, it is NP-hard to find a CNF of constantly many clauses which satisfies any constant fraction of the bags. This is in contrast with the work of [R. Saket, 2021], which gave a (2/5)-approximation for learning ORs using a halfspace. Thus, our result provides a separation between constant-clause CNFs and halfspaces as hypotheses for LLP learning of ORs.
Next, we prove the hardness of satisfying more than a 1/2 + o(1) fraction of such bags using a t-DNF (i.e., a DNF where each term has ≤ t literals) for any constant t. In standard PAC learning, such hardness was known [S. Khot and R. Saket, 2008] only for learning noisy ORs. We also study the learnability of parities and show that it is NP-hard to satisfy more than a (q/2^{q-1} + o(1))-fraction of q-sized bags which are consistent with a parity using a parity, while a random-parity-based algorithm achieves a (1/2^{q-2})-approximation.
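
The bag setup above is easy to make concrete. The sketch below is a hypothetical toy illustration (the names or_target, make_bags, and consistent_with are ours, not the paper's): examples are grouped into bags of size at most 2, only the average label per bag is revealed, and a hypothesis is scored by the fraction of bag averages it reproduces.

```python
import random

# Hidden labeling function: OR of the first two coordinates (illustrative).
def or_target(x):
    return int(x[0] or x[1])

# Aggregate examples into bags, exposing only the average label per bag.
def make_bags(examples, bag_size=2):
    random.shuffle(examples)
    bags = [examples[i:i + bag_size] for i in range(0, len(examples), bag_size)]
    return [(bag, sum(or_target(x) for x in bag) / len(bag)) for bag in bags]

# Fraction of bags whose average label the hypothesis reproduces exactly.
def consistent_with(hypothesis, bags):
    ok = sum(1 for bag, avg in bags
             if sum(hypothesis(x) for x in bag) / len(bag) == avg)
    return ok / len(bags)

examples = [tuple(random.randint(0, 1) for _ in range(5)) for _ in range(100)]
bags = make_bags(examples)
print("the OR itself satisfies", consistent_with(or_target, bags), "of the bags")
print("the all-zeros hypothesis:", consistent_with(lambda x: 0, bags))
```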

Venkatesan Guruswami and Rishi Saket. Hardness of Learning Boolean Functions from Label Proportions. In 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 284, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{guruswami_et_al:LIPIcs.FSTTCS.2023.37, author = {Guruswami, Venkatesan and Saket, Rishi}, title = {{Hardness of Learning Boolean Functions from Label Proportions}}, booktitle = {43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)}, pages = {37:1--37:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-304-1}, ISSN = {1868-8969}, year = {2023}, volume = {284}, editor = {Bouyer, Patricia and Srinivasan, Srikanth}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2023.37}, URN = {urn:nbn:de:0030-drops-194106}, doi = {10.4230/LIPIcs.FSTTCS.2023.37}, annote = {Keywords: Learning from label proportions, Computational learning, Hardness, Boolean functions} }


RANDOM

**Published in:** LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)

We present an explicit construction of a sequence of rate-1/2 Wozencraft ensemble codes (over any fixed finite field 𝔽_q) that achieve minimum distance Ω(√k), where k is the message length. The coefficients of the Wozencraft ensemble codes are constructed using Sidon sets and the cyclic structure of 𝔽_{q^k}, where k+1 is prime and q is a primitive root modulo k+1. Assuming Artin’s conjecture, there are infinitely many such k for any prime power q.
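
For readers unfamiliar with the Sidon-set ingredient: a set S is Sidon if all pairwise sums s + t (with s ≤ t) are distinct. The sketch below greedily builds such a set modulo N and verifies the property; it is only a minimal illustration of the definition and does not implement the paper's construction inside the cyclic group of 𝔽_{q^k}.

```python
from itertools import combinations_with_replacement

# A set S is Sidon (mod N) if all pairwise sums s + t, s <= t, are distinct.
def is_sidon(S, N):
    sums = [(s + t) % N for s, t in combinations_with_replacement(S, 2)]
    return len(sums) == len(set(sums))

# Greedy (Mian-Chowla-style) construction: keep x whenever the property survives.
def greedy_sidon(N):
    S = []
    for x in range(N):
        if is_sidon(S + [x], N):
            S.append(x)
    return S

N = 101
S = greedy_sidon(N)
assert is_sidon(S, N)
print(f"Sidon set mod {N}: {S}")
```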

Venkatesan Guruswami and Shilun Li. A Deterministic Construction of a Large Distance Code from the Wozencraft Ensemble. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 50:1-50:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2023.50, author = {Guruswami, Venkatesan and Li, Shilun}, title = {{A Deterministic Construction of a Large Distance Code from the Wozencraft Ensemble}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)}, pages = {50:1--50:10}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-296-9}, ISSN = {1868-8969}, year = {2023}, volume = {275}, editor = {Megow, Nicole and Smith, Adam}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.50}, URN = {urn:nbn:de:0030-drops-188751}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2023.50}, annote = {Keywords: Algebraic codes, Pseudorandomness, Explicit Construction, Wozencraft Ensemble, Sidon Sets} }


Complete Volume

**Published in:** LIPIcs, Volume 250, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)

LIPIcs, Volume 250, FSTTCS 2022, Complete Volume

42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, pp. 1-792, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@Proceedings{dawar_et_al:LIPIcs.FSTTCS.2022, title = {{LIPIcs, Volume 250, FSTTCS 2022, Complete Volume}}, booktitle = {42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)}, pages = {1--792}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-261-7}, ISSN = {1868-8969}, year = {2022}, volume = {250}, editor = {Dawar, Anuj and Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2022}, URN = {urn:nbn:de:0030-drops-173910}, doi = {10.4230/LIPIcs.FSTTCS.2022}, annote = {Keywords: LIPIcs, Volume 250, FSTTCS 2022, Complete Volume} }


Front Matter

**Published in:** LIPIcs, Volume 250, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)

Front Matter, Table of Contents, Preface, Conference Organization

42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{dawar_et_al:LIPIcs.FSTTCS.2022.0, author = {Dawar, Anuj and Guruswami, Venkatesan}, title = {{Front Matter, Table of Contents, Preface, Conference Organization}}, booktitle = {42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)}, pages = {0:i--0:xvi}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-261-7}, ISSN = {1868-8969}, year = {2022}, volume = {250}, editor = {Dawar, Anuj and Guruswami, Venkatesan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2022.0}, URN = {urn:nbn:de:0030-drops-173928}, doi = {10.4230/LIPIcs.FSTTCS.2022.0}, annote = {Keywords: Front Matter, Table of Contents, Preface, Conference Organization} }


**Published in:** Dagstuhl Reports, Volume 12, Issue 5 (2022)

Constraint satisfaction has always played a central role in computational complexity theory; appropriate versions of CSPs are classical complete problems for most standard complexity classes. CSPs constitute a very rich and yet sufficiently manageable class of problems to give a good perspective on general computational phenomena. For instance, they help to understand which mathematical properties make a computational problem tractable (in a wide sense, e.g., polynomial-time solvable, non-trivially approximable, fixed-parameter tractable, or definable in a weak logic). In the last 15 years, research activity in this area has significantly intensified, and hugely impressive progress has been made. The Dagstuhl Seminar 22201 "The Constraint Satisfaction Problem: Complexity and Approximability" was aimed at bringing together researchers using all the different techniques in the study of the CSP so that they could share the insights obtained during the past four years. This report documents the material presented during the course of the seminar.

Martin Grohe, Venkatesan Guruswami, Dániel Marx, and Stanislav Živný. The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 22201). In Dagstuhl Reports, Volume 12, Issue 5, pp. 112-130, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@Article{grohe_et_al:DagRep.12.5.112, author = {Grohe, Martin and Guruswami, Venkatesan and Marx, D\'{a}niel and \v{Z}ivn\'{y}, Stanislav}, title = {{The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 22201)}}, pages = {112--130}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2022}, volume = {12}, number = {5}, editor = {Grohe, Martin and Guruswami, Venkatesan and Marx, D\'{a}niel and \v{Z}ivn\'{y}, Stanislav}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.5.112}, URN = {urn:nbn:de:0030-drops-174453}, doi = {10.4230/DagRep.12.5.112}, annote = {Keywords: Constraint satisfaction problem (CSP); Computational complexity; Hardness of approximation; Universal algebra; Semidefinite programming} }


RANDOM

**Published in:** LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)

We consider the problem of designing low-redundancy codes in settings where one must correct deletions in conjunction with substitutions or adjacent transpositions, a combination of errors that is typically observed in DNA-based data storage. One of the most basic versions of this problem was settled more than 50 years ago by Levenshtein, who proved that binary Varshamov-Tenengolts (VT) codes correct one arbitrary edit error, i.e., one deletion or one substitution, with nearly optimal redundancy (a decoding sketch for this classical baseline follows the list below). However, this approach fails to extend to many simple and natural variations of the binary single-edit error setting. In this work, we make progress on the code design problem above in three such variations:
- We construct linear-time encodable and decodable length-n non-binary codes correcting a single edit error with nearly optimal redundancy log n + O(log log n), providing an alternative simpler proof of a result by Cai, Chee, Gabrys, Kiah, and Nguyen (IEEE Trans. Inf. Theory 2021). This is achieved by employing what we call weighted VT sketches, a new notion that may be of independent interest.
- We show the existence of a binary code correcting one deletion or one adjacent transposition with nearly optimal redundancy log n + O(log log n).
- We construct linear-time encodable and list-decodable binary codes with list-size 2 for one deletion and one substitution with redundancy 4 log n + O(log log n). This matches the existential bound up to an O(log log n) additive term.
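
As promised above, here is the classical single-deletion baseline made executable: the binary VT code VT_a(n) = {x : Σ_i i·x_i ≡ a (mod n+1)}. The sketch decodes a deletion by brute-force reinsertion, relying only on the code's guarantee that exactly one codeword is consistent with the received string; it does not reproduce the paper's weighted VT sketches.

```python
# Varshamov-Tenengolts syndrome: sum of i * x_i (1-indexed) mod n+1.
def vt_syndrome(x, n):
    return sum(i * b for i, b in enumerate(x, start=1)) % (n + 1)

def vt_codewords(n, a=0):
    words = [tuple((w >> i) & 1 for i in range(n)) for w in range(2 ** n)]
    return [x for x in words if vt_syndrome(x, n) == a]

# Decode one deletion by trying every reinsertion and keeping the codeword.
def decode_one_deletion(y, n, a=0):
    candidates = {tuple(y[:i] + [b] + y[i:])
                  for i in range(len(y) + 1) for b in (0, 1)}
    matches = [c for c in candidates if vt_syndrome(c, n) == a]
    assert len(matches) == 1  # single-deletion correction guarantees uniqueness
    return matches[0]

n = 8
for x in vt_codewords(n):
    for i in range(n):                       # delete position i, then re-decode
        y = list(x[:i] + x[i + 1:])
        assert decode_one_deletion(y, n) == x
print("all single deletions corrected for VT_0(8)")
```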

Ryan Gabrys, Venkatesan Guruswami, João Ribeiro, and Ke Wu. Beyond Single-Deletion Correcting Codes: Substitutions and Transpositions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{gabrys_et_al:LIPIcs.APPROX/RANDOM.2022.8, author = {Gabrys, Ryan and Guruswami, Venkatesan and Ribeiro, Jo\~{a}o and Wu, Ke}, title = {{Beyond Single-Deletion Correcting Codes: Substitutions and Transpositions}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)}, pages = {8:1--8:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-249-5}, ISSN = {1868-8969}, year = {2022}, volume = {245}, editor = {Chakrabarti, Amit and Swamy, Chaitanya}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.8}, URN = {urn:nbn:de:0030-drops-171302}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2022.8}, annote = {Keywords: Synchronization errors, Optimal redundancy, Explicit codes} }


RANDOM

**Published in:** LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)

Polarization is an unprecedented coding technique in that it not only achieves channel capacity, but also does so at a faster speed of convergence than any other technique. This speed is measured by the "scaling exponent", and its importance is three-fold. Firstly, estimating the scaling exponent is challenging and demands a deeper understanding of the dynamics of communication channels. Secondly, scaling exponents serve as a benchmark for different variants of polar codes that helps us select the proper variant for real-life applications. Thirdly, the need to optimize for the scaling exponent sheds light on how to reinforce the design of polar codes.
In this paper, we generalize the binary erasure channel (BEC), the simplest communication channel and the protagonist of many polar code studies, to the "tetrahedral erasure channel" (TEC). We then invoke Mori-Tanaka’s 2 × 2 matrix over 𝔽_4 to construct polar codes over the TEC. Our main contribution is showing that the dynamics of TECs converge to an almost-one-parameter family of channels, which then leads to an upper bound of 3.328 on the scaling exponent. This is the first non-binary matrix whose scaling exponent has been upper-bounded. It also polarizes the BEC faster than all known binary matrices of size up to 23 × 23. Our result indicates that expanding the alphabet is a more effective and practical alternative to enlarging the matrix in order to achieve faster polarization.
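
For intuition about the binary baseline the paper improves on: over the BEC, Arıkan's 2 × 2 transform turns one channel with erasure rate z into two synthetic channels with rates z² and 2z - z². The sketch below tracks how quickly the population of synthetic channels escapes the middle of [0,1]; the decay rate of the unpolarized fraction is what the scaling exponent (roughly 3.63 for this binary kernel on the BEC) quantifies. This is a rough illustration only and does not implement the paper's 𝔽_4 kernel or the TEC.

```python
# One level of binary-kernel polarization for the BEC: each channel with
# erasure probability z splits into a better one (z*z) and a worse one (2z - z*z).
def polarize_once(zs):
    out = []
    for z in zs:
        out.append(z * z)
        out.append(2 * z - z * z)
    return out

zs = [0.5]
for level in range(1, 17):
    zs = polarize_once(zs)
    if level % 4 == 0:
        unpolarized = sum(1 for z in zs if 1e-9 < z < 1 - 1e-9) / len(zs)
        print(f"level {level:2d}: fraction of channels not yet extreme = {unpolarized:.4f}")
```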

Iwan Duursma, Ryan Gabrys, Venkatesan Guruswami, Ting-Chun Lin, and Hsin-Po Wang. Accelerating Polarization via Alphabet Extension. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{duursma_et_al:LIPIcs.APPROX/RANDOM.2022.17, author = {Duursma, Iwan and Gabrys, Ryan and Guruswami, Venkatesan and Lin, Ting-Chun and Wang, Hsin-Po}, title = {{Accelerating Polarization via Alphabet Extension}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)}, pages = {17:1--17:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-249-5}, ISSN = {1868-8969}, year = {2022}, volume = {245}, editor = {Chakrabarti, Amit and Swamy, Chaitanya}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.17}, URN = {urn:nbn:de:0030-drops-171393}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2022.17}, annote = {Keywords: polar code, scaling exponent} }


RANDOM

**Published in:** LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)

In the range avoidance problem, the input is a multi-output Boolean circuit with more outputs than inputs, and the goal is to find a string outside its range (which is guaranteed to exist). We show that well-known explicit construction questions, such as finding binary linear codes achieving the Gilbert-Varshamov bound or list-decoding capacity and constructing rigid matrices, reduce to the range avoidance problem for log-depth circuits, and by a further recent reduction [Ren, Santhanam, and Wang, FOCS 2022] to NC⁰₄ circuits, where each output depends on at most 4 input bits.
On the algorithmic side, we show that range avoidance for NC⁰₂ circuits can be solved in polynomial time. We identify a general condition relating to correlation with low-degree parities which implies that any almost pairwise independent set contains some string that avoids the range of every circuit in the class. We apply this to NC⁰ circuits, and to small-width CNF/DNF and general De Morgan formulae (via a connection to approximate degree), yielding non-trivial small hitting sets for range avoidance in these cases.
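
The problem statement is simple enough to state in code. Below is a brute-force finder, exponential in the input length and purely illustrative (the paper is about doing this efficiently for restricted classes); toy_circuit is a hypothetical example in which each output reads at most two input bits, in the spirit of NC⁰₂.

```python
from itertools import product

# Given a circuit C : {0,1}^n -> {0,1}^m with m > n, counting guarantees
# some m-bit string outside its range; find one by exhaustive search.
def avoid(circuit, n, m):
    assert m > n
    image = {circuit(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=m):
        if y not in image:
            return y

# Toy map where each output depends on at most 2 inputs (NC0_2-style).
def toy_circuit(x):
    return (x[0] ^ x[1], x[1] & x[2], x[2] | x[0], x[0])

print(avoid(toy_circuit, n=3, m=4))
```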

Venkatesan Guruswami, Xin Lyu, and Xiuhan Wang. Range Avoidance for Low-Depth Circuits and Connections to Pseudorandomness. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2022.20, author = {Guruswami, Venkatesan and Lyu, Xin and Wang, Xiuhan}, title = {{Range Avoidance for Low-Depth Circuits and Connections to Pseudorandomness}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)}, pages = {20:1--20:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-249-5}, ISSN = {1868-8969}, year = {2022}, volume = {245}, editor = {Chakrabarti, Amit and Swamy, Chaitanya}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.20}, URN = {urn:nbn:de:0030-drops-171428}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2022.20}, annote = {Keywords: Pseudorandomness, Explicit constructions, Low-depth circuits, Boolean function analysis, Hitting sets} }


APPROX

**Published in:** LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)

Let H(k,n,p) be the distribution on k-uniform hypergraphs where every subset of [n] of size k is included as a hyperedge with probability p independently. In this work, we design and analyze a simple spectral algorithm that certifies a bound on the size of the largest clique, ω(H), in hypergraphs H ∼ H(k,n,p). For example, for any constant p, with high probability over the choice of the hypergraph, our spectral algorithm certifies a bound of Õ(√n) on the clique number in polynomial time. This matches, up to polylog(n) factors, the best known certificate for the clique number in random graphs, which is the special case of k = 2.
Prior to our work, the best known refutation algorithms [Amin Coja-Oghlan et al., 2004; Sarah R. Allen et al., 2015] relied on a reduction to the problem of refuting random k-XOR via Feige’s XOR trick [Uriel Feige, 2002], and yielded a polynomially worse bound of Õ(n^{3/4}) on the clique number when p = O(1). Our algorithm bypasses the XOR trick and relies instead on a natural generalization of the Lovász theta semidefinite programming relaxation for cliques in hypergraphs.
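
The k = 2 baseline mentioned above admits a short demonstration. For a graph G, let M have entry +1 for edges, -1 for non-edges, and 0 on the diagonal; the normalized indicator vector of any clique S witnesses λ_max(M) ≥ |S| - 1, so λ_max(M) + 1 is a valid certificate for ω(G), and for G(n, 1/2) random matrix theory puts λ_max(M) near 2√n. This is only the classical graph case, not the hypergraph SDP of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                   # adjacency matrix of G(n, 1/2)

M = np.where(A == 1, 1.0, -1.0)               # +1 on edges, -1 on non-edges
np.fill_diagonal(M, 0.0)

# For any clique S, the normalized indicator x of S gives x^T M x = |S| - 1,
# so omega(G) <= lambda_max(M) + 1 is a certified upper bound.
bound = np.linalg.eigvalsh(M)[-1] + 1
print(f"certified clique bound: {bound:.1f}   (2*sqrt(n) = {2 * np.sqrt(n):.1f})")
```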

Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar. Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 42:1-42:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2022.42, author = {Guruswami, Venkatesan and Kothari, Pravesh K. and Manohar, Peter}, title = {{Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)}, pages = {42:1--42:7}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-249-5}, ISSN = {1868-8969}, year = {2022}, volume = {245}, editor = {Chakrabarti, Amit and Swamy, Chaitanya}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.42}, URN = {urn:nbn:de:0030-drops-171642}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2022.42}, annote = {Keywords: Planted clique, Average-case complexity, Spectral refutation, Random matrix theory} }


**Published in:** LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)

Random subspaces X of ℝⁿ of dimension proportional to n are, with high probability, well-spread with respect to the 𝓁₂-norm. Namely, every nonzero x ∈ X is "robustly non-sparse" in the following sense: x is ε ‖x‖₂-far in 𝓁₂-distance from all δ n-sparse vectors, for positive constants ε, δ bounded away from 0. This "𝓁₂-spread" property is the natural counterpart, for subspaces over the reals, of the minimum distance of linear codes over finite fields, and corresponds to X being a Euclidean section of the 𝓁₁ unit ball. Explicit 𝓁₂-spread subspaces of dimension Ω(n), however, are unknown, and the best known explicit constructions (which achieve weaker spread properties) are analogs of low density parity check (LDPC) codes over the reals, i.e., they are kernels of certain sparse matrices.
Motivated by this, we study the spread properties of the kernels of sparse random matrices. We prove that with high probability such subspaces contain vectors x that are o(1)⋅‖x‖₂-close to o(n)-sparse with respect to the 𝓁₂-norm, and in particular are not 𝓁₂-spread. This is strikingly different from the case of random LDPC codes, whose distance is asymptotically almost as good as that of (dense) random linear codes.
On the other hand, for p < 2 we prove that such subspaces are 𝓁_p-spread with high probability. The spread property of sparse random matrices thus exhibits a threshold behavior at p = 2. Our proof for p < 2 moreover shows that a random sparse matrix has the stronger restricted isometry property (RIP) with respect to the 𝓁_p norm, and in fact this follows solely from the unique expansion of a random biregular graph, yielding a somewhat unexpected generalization of a similar result for the 𝓁₁ norm [Berinde et al., 2008]. Instantiating this with suitable explicit expanders, we obtain the first explicit constructions of 𝓁_p-RIP matrices for 1 ≤ p < p₀, where 1 < p₀ < 2 is an absolute constant.
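
The central quantity in these definitions is easy to compute: the 𝓁₂-distance from x to the set of s-sparse vectors is the norm of everything outside the s largest-magnitude coordinates. The sketch below evaluates it for a vector in the kernel of a sparse random matrix; this only makes the definition concrete (a random kernel vector will not, by itself, exhibit the near-sparse vectors whose existence the abstract proves).

```python
import numpy as np

rng = np.random.default_rng(1)

# Distance from x to the set of s-sparse vectors: drop the s largest-magnitude
# coordinates and take the l2 norm of what remains.
def dist_to_sparse(x, s):
    idx = np.argsort(np.abs(x))
    return np.linalg.norm(x[idx[:-s]])

m, n, d = 100, 200, 8                 # m x n matrix with d nonzeros per row
A = np.zeros((m, n))
for row in A:
    row[rng.choice(n, size=d, replace=False)] = rng.choice([-1.0, 1.0], size=d)

_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                            # a unit vector in the kernel of A
for s in (5, 20, 50):
    print(f"s = {s:3d}: distance to s-sparse vectors = {dist_to_sparse(x, s):.3f}")
```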

Venkatesan Guruswami, Peter Manohar, and Jonathan Mosheiff. 𝓁_p-Spread and Restricted Isometry Properties of Sparse Random Matrices. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


@InProceedings{guruswami_et_al:LIPIcs.CCC.2022.7, author = {Guruswami, Venkatesan and Manohar, Peter and Mosheiff, Jonathan}, title = {{𝓁\underlinep-Spread and Restricted Isometry Properties of Sparse Random Matrices}}, booktitle = {37th Computational Complexity Conference (CCC 2022)}, pages = {7:1--7:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-241-9}, ISSN = {1868-8969}, year = {2022}, volume = {234}, editor = {Lovett, Shachar}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.7}, URN = {urn:nbn:de:0030-drops-165695}, doi = {10.4230/LIPIcs.CCC.2022.7}, annote = {Keywords: Spread Subspaces, Euclidean Sections, Restricted Isometry Property, Sparse Matrices} }


**Published in:** LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)

We introduce the problem of finding a satisfying assignment to a CNF formula that must further belong to a prescribed input subspace. Equivalent formulations of the problem include finding a point outside a union of subspaces (the Union-of-Subspace Avoidance (USA) problem), and finding a common zero of a system of polynomials over 𝔽₂ each of which is a product of affine forms.
We focus on the case of k-CNF formulas (the k-Sub-Sat problem). Clearly, k-Sub-Sat is no easier than k-SAT, and might be harder. Indeed, via simple reductions we show that 2-Sub-Sat is NP-hard, and W[1]-hard when parameterized by the co-dimension of the subspace. We also prove that the optimization version Max-2-Sub-Sat is NP-hard to approximate better than the trivial 3/4 ratio even on satisfiable instances.
On the algorithmic front, we investigate fast exponential algorithms which give non-trivial savings over brute-force algorithms. We give a simple branching algorithm with running time (1.5)^r for 2-Sub-Sat, where r is the subspace dimension, as well as an O^*(1.4312ⁿ)-time algorithm, where n is the number of variables.
Turning to k-Sub-Sat for k ⩾ 3, while known algorithms for solving a system of degree-k polynomial equations already imply a solution with running time ≈ 2^{r(1-1/(2k))}, we explore a more combinatorial approach. Based on an analysis of critical variables (a key notion underlying the randomized k-SAT algorithm of Paturi, Pudlák, and Zane), we give an algorithm with running time ≈ (n choose ⩽ t) ⋅ 2^{n-n/k}, where n is the number of variables and t is the co-dimension of the subspace. This improves upon the running time of the polynomial equations approach for small co-dimension. Our combinatorial approach also achieves polynomial space, in contrast to the algebraic approach, which uses exponential space.
We also give a PPZ-style algorithm for k-Sub-Sat with running time ≈ 2^{n-n/(2k)}. This algorithm is in fact oblivious to the structure of the subspace, and extends when the subspace-membership constraint is replaced by any constraint for which partial satisfying assignments can be efficiently completed to a full satisfying assignment. Finally, for systems of O(n) polynomial equations in n variables over 𝔽₂, we give a fast exponential algorithm when each polynomial has bounded degree irreducible factors (but can otherwise have large degree), using a degree reduction trick.
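
To fix notation, here is the brute-force baseline these algorithms improve on: the subspace is specified by an offset and r basis vectors over 𝔽₂, and one can always enumerate its 2^r points and evaluate the CNF at each. The encoding (literals as (variable, sign) pairs) is ours, chosen for brevity; nothing below reflects the paper's branching or critical-variable analysis.

```python
from itertools import product

# Enumerate all points of offset + span(basis) over F_2 (vectors as bit tuples).
def subspace_points(offset, basis):
    for coeffs in product((0, 1), repeat=len(basis)):
        point = list(offset)
        for c, b in zip(coeffs, basis):
            if c:
                point = [p ^ v for p, v in zip(point, b)]
        yield tuple(point)

# A clause is a list of (variable, sign) literals; sign 1 means positive.
def satisfies(cnf, x):
    return all(any(x[v] == s for v, s in clause) for clause in cnf)

# (x0 or x1) and (not x1 or x2), restricted to (0,0,0) + span{(1,1,0), (0,0,1)}.
cnf = [[(0, 1), (1, 1)], [(1, 0), (2, 1)]]
offset, basis = (0, 0, 0), [(1, 1, 0), (0, 0, 1)]
print([x for x in subspace_points(offset, basis) if satisfies(cnf, x)])
```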

Vikraman Arvind and Venkatesan Guruswami. CNF Satisfiability in a Subspace and Related Problems. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{arvind_et_al:LIPIcs.IPEC.2021.5, author = {Arvind, Vikraman and Guruswami, Venkatesan}, title = {{CNF Satisfiability in a Subspace and Related Problems}}, booktitle = {16th International Symposium on Parameterized and Exact Computation (IPEC 2021)}, pages = {5:1--5:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-216-7}, ISSN = {1868-8969}, year = {2021}, volume = {214}, editor = {Golovach, Petr A. and Zehavi, Meirav}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.5}, URN = {urn:nbn:de:0030-drops-153886}, doi = {10.4230/LIPIcs.IPEC.2021.5}, annote = {Keywords: CNF Satisfiability, Exact exponential algorithms, Hardness results} }


RANDOM

**Published in:** LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)

We propose a framework to study the effect of local recovery requirements of codeword symbols on the dimension of linear codes, based on a combinatorial proxy that we call visible rank. The locality constraints of a linear code are stipulated by a matrix H of ⋆’s and 0’s (which we call a "stencil"), whose rows correspond to the local parity checks (with the ⋆’s indicating the support of the check). The visible rank of H is the largest r for which there is an r × r submatrix in H with a unique generalized diagonal of ⋆’s. The visible rank yields a field-independent combinatorial lower bound on the rank of H and thus on the co-dimension of the code.
We point out connections of the visible rank to other notions in the literature such as unique restricted graph matchings, matroids, spanoids, and min-rank. In particular, we prove a rank-nullity type theorem relating visible rank to the rank of an associated construct called symmetric spanoid, which was introduced by Dvir, Gopi, Gu, and Wigderson [Zeev Dvir et al., 2020]. Using this connection and a construction of appropriate stencils, we answer a question posed in [Zeev Dvir et al., 2020] and demonstrate that symmetric spanoid rank cannot improve the currently best known Õ(n^{(q-2)/(q-1)}) upper bound on the dimension of q-query locally correctable codes (LCCs) of length n. This also pins down the efficacy of visible rank as a proxy for the dimension of LCCs.
We also study the t-Disjoint Repair Group Property (t-DRGP) of codes, where each codeword symbol must belong to t disjoint check equations. It is known that linear codes with 2-DRGP must have co-dimension Ω(√n) (which is matched by a simple product code construction). We show that there are stencils corresponding to 2-DRGP with visible rank as small as O(log n). However, we show that the second tensor power of any 2-DRGP stencil has visible rank Ω(n), thus recovering the Ω(√n) lower bound for 2-DRGP. For q-LCCs, however, the k-th tensor power for k ⩽ n^{o(1)} is unable to improve the Õ(n^{(q-2)/(q-1)}) upper bound on the dimension of q-LCCs by a polynomial factor. Inspired by this, and as a notion of intrinsic interest, we define the visible capacity of a stencil as the limiting visible rank of high tensor powers, analogous to Shannon capacity, and pose the question of whether there can be large gaps between visible capacity and algebraic rank.
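
The definition of visible rank is directly executable for toy stencils. The brute-force sketch below (exponential time, illustration only) searches every r × r submatrix for a unique generalized diagonal of ⋆'s, i.e., exactly one permutation whose entries are all ⋆.

```python
from itertools import combinations, permutations

# Count the generalized diagonals of stars in the submatrix (rows x cols).
def star_diagonals(H, rows, cols):
    return sum(1 for perm in permutations(cols)
               if all(H[i][j] == '*' for i, j in zip(rows, perm)))

# Visible rank: largest r such that some r x r submatrix has exactly one
# all-star generalized diagonal.
def visible_rank(H):
    m, n = len(H), len(H[0])
    best = 0
    for r in range(1, min(m, n) + 1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if star_diagonals(H, rows, cols) == 1:
                    best = r
    return best

H = ["*0*",
     "0**",
     "**0"]
print("visible rank:", visible_rank(H))   # 2: the full 3x3 has two star diagonals
```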

Omar Alrabiah and Venkatesan Guruswami. Visible Rank and Codes with Locality. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 57:1-57:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{alrabiah_et_al:LIPIcs.APPROX/RANDOM.2021.57, author = {Alrabiah, Omar and Guruswami, Venkatesan}, title = {{Visible Rank and Codes with Locality}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)}, pages = {57:1--57:18}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-207-5}, ISSN = {1868-8969}, year = {2021}, volume = {207}, editor = {Wootters, Mary and Sanit\`{a}, Laura}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.57}, URN = {urn:nbn:de:0030-drops-147502}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2021.57}, annote = {Keywords: Visible Rank, Stencils, Locality, DRGP Codes, Locally Correctable Codes} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)

Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form and, given a CSP instance, the objective is to distinguish whether the strong form can be satisfied vs. even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and Håstad [Per Austrin et al., 2017], there has been a flurry of works on PCSPs, including recent breakthroughs in approximate graph coloring [Barto et al., 2018; Andrei A. Krokhin and Jakub Opršal, 2019; Marcin Wrochna and Stanislav Živný, 2020]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, where the closure properties of the satisfying solutions, known as polymorphisms, are analyzed.
The polymorphisms of PCSPs are significantly richer than those of CSPs - even in the Boolean case, we still do not know whether there exists a dichotomy result for PCSPs analogous to Schaefer’s dichotomy result [Thomas J. Schaefer, 1978] for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSPs have the predicate x ≤ y. In the algebraic framework, this is the special case of Boolean PCSPs where the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [Mark Braverman et al., 2021], which is a perfect-completeness surrogate of the Unique Games Conjecture.
In particular, assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every ε > 0 it has polymorphisms where each coordinate has Shapley value at most ε; otherwise it is NP-hard. The algorithmic part of our dichotomy result is based on a structural lemma showing that Boolean monotone functions with each coordinate having low Shapley value have arbitrarily large threshold functions as minors. The hardness part proceeds by showing that the Shapley value is consistent under a uniformly random 2-to-1 minor. As a structural result of independent interest, we construct an example showing that the Shapley value can be inconsistent under an adversarial 2-to-1 minor.
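
For a monotone Boolean function f with f(0…0) = 0 and f(1…1) = 1, the Shapley value of coordinate i is the probability, over a uniformly random order in which coordinates are flipped from 0 to 1, that flipping i is what changes f from 0 to 1. A Monte Carlo estimate of this quantity takes a few lines; on MAJ_n every coordinate gets value 1/n by symmetry, which illustrates why large majorities have all Shapley values small.

```python
import random

# Monte Carlo estimate of Shapley values of a monotone Boolean function:
# flip coordinates from 0 to 1 in a random order and credit the pivotal one.
def shapley_estimate(f, n, trials=20000):
    counts = [0] * n
    for _ in range(trials):
        order = random.sample(range(n), n)
        x = [0] * n
        for i in order:
            x[i] = 1
            if f(x):              # f just switched to 1: coordinate i is pivotal
                counts[i] += 1
                break
    return [c / trials for c in counts]

maj = lambda x: sum(x) > len(x) // 2
print(shapley_estimate(maj, n=5))   # every entry is close to 1/5
```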

Joshua Brakensiek, Venkatesan Guruswami, and Sai Sandeep. Conditional Dichotomy of Boolean Ordered Promise CSPs. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{brakensiek_et_al:LIPIcs.ICALP.2021.37, author = {Brakensiek, Joshua and Guruswami, Venkatesan and Sandeep, Sai}, title = {{Conditional Dichotomy of Boolean Ordered Promise CSPs}}, booktitle = {48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)}, pages = {37:1--37:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-195-5}, ISSN = {1868-8969}, year = {2021}, volume = {198}, editor = {Bansal, Nikhil and Merelli, Emanuela and Worrell, James}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.37}, URN = {urn:nbn:de:0030-drops-141060}, doi = {10.4230/LIPIcs.ICALP.2021.37}, annote = {Keywords: promise constraint satisfaction, Boolean ordered PCSP, Shapley value, rich 2-to-1 conjecture, random minor} }


**Published in:** LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)

Suppose that 𝒫 is a property that may be satisfied by a random code C ⊂ Σⁿ. For example, for some p ∈ (0,1), 𝒫 might be the property that there exist three elements of C that lie in some Hamming ball of radius pn. We say that R^* is the threshold rate for 𝒫 if a random code of rate R^* + ε is very likely to satisfy 𝒫, while a random code of rate R^* - ε is very unlikely to satisfy 𝒫. While random codes are well-studied in coding theory, even the threshold rates for relatively simple properties like the one above are not well understood.
We characterize threshold rates for a rich class of properties. These properties, like the example above, are defined by the inclusion of specific sets of codewords which are also suitably "symmetric." For properties in this class, we show that the threshold rate is in fact equal to the lower bound that a simple first-moment calculation obtains. Our techniques not only pin down the threshold rate for the property 𝒫 above, they give sharp bounds on the threshold rate for list-recovery in several parameter regimes, as well as an efficient algorithm for estimating the threshold rates for list-recovery in general.
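
To see the kind of first-moment calculation the result refers to, consider (as a toy stand-in for the paper's general symmetric properties) the pairwise property "some two codewords lie within distance pn". For a uniformly random code of rate R over an alphabet of size q, a back-of-the-envelope computation reads:

```latex
% E[# of pairs at distance <= pn] for a random code with q^{Rn} codewords;
% h_q is the q-ary entropy, and Vol_q(pn) ~ q^{h_q(p) n} is the ball volume.
\[
  \mathbb{E}\bigl[\#\{(c_1,c_2) : d(c_1,c_2)\le pn\}\bigr]
  \;\approx\; q^{2Rn}\cdot\frac{\mathrm{Vol}_q(pn)}{q^{n}}
  \;=\; q^{\left(2R - (1-h_q(p))\right)n + o(n)},
\]
% which vanishes for R < (1-h_q(p))/2 and explodes for R > (1-h_q(p))/2,
% so the first-moment prediction for this toy property is R^* = (1-h_q(p))/2.
```

The theorem in the paper says that, for the symmetric properties it covers, the naive prediction of this kind is in fact the true threshold.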

Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Sharp Threshold Rates for Random Codes. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.5, author = {Guruswami, Venkatesan and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary}, title = {{Sharp Threshold Rates for Random Codes}}, booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)}, pages = {5:1--5:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-177-1}, ISSN = {1868-8969}, year = {2021}, volume = {185}, editor = {Lee, James R.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.5}, URN = {urn:nbn:de:0030-drops-135446}, doi = {10.4230/LIPIcs.ITCS.2021.5}, annote = {Keywords: Coding theory, Random codes, Sharp thresholds} }


**Published in:** LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)

Random walks on expanders are a central and versatile tool in pseudorandomness. If an arbitrary half of the vertices of an expander graph are marked, known Chernoff bounds for expander walks imply that the number M of marked vertices visited in a long n-step random walk strongly concentrates around its expected value n/2. Surprisingly, it was recently shown that the parity of M also has exponentially small bias.
Is there a common unification of these results? What other statistics of M resemble those of the binomial distribution (the Hamming weight of a random n-bit string)? To gain insight into such questions, we analyze a simpler model called the sticky random walk. This model is a natural stepping stone towards understanding expander random walks, and we also show that it is a necessary step. The sticky random walk starts with a random bit, and then each subsequent bit independently equals the previous bit with probability (1+λ)/2. Here λ is a proxy for the expander’s (second largest) eigenvalue.
Using the Krawtchouk expansion of functions, we derive several probabilistic results about the sticky random walk. We show an asymptotically tight Θ(λ) bound on the total variation distance between the (Hamming weight of the) sticky walk and the binomial distribution. We prove that the correlation between the majority and parity bit of the sticky walk is bounded by O(n^{-1/4}). This lends hope to unifying Chernoff bounds and parity concentration, as well as to establishing other interesting statistical properties, of expander random walks.
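
The sticky walk is trivial to simulate, and a sampling experiment makes the Θ(λ) statement plausible: below we estimate the total variation distance between the Hamming weight of the sticky walk and the binomial distribution. (The paper's bound concerns the exact distributions; the sampling noise here is visible at λ = 0, where the true distance is 0.)

```python
import math
import random
from collections import Counter

# Sticky walk of length n: each bit equals the previous one w.p. (1 + lam)/2.
def sticky_weight(n, lam):
    bits = [random.randint(0, 1)]
    for _ in range(n - 1):
        stay = random.random() < (1 + lam) / 2
        bits.append(bits[-1] if stay else 1 - bits[-1])
    return sum(bits)

def tv_to_binomial(n, lam, samples=100_000):
    counts = Counter(sticky_weight(n, lam) for _ in range(samples))
    return 0.5 * sum(abs(counts.get(k, 0) / samples - math.comb(n, k) / 2 ** n)
                     for k in range(n + 1))

for lam in (0.0, 0.1, 0.3):
    print(f"lambda = {lam}: empirical TV distance ≈ {tv_to_binomial(20, lam):.3f}")
```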

Venkatesan Guruswami and Vinayak M. Kumar. Pseudobinomiality of the Sticky Random Walk. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 48:1-48:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.48, author = {Guruswami, Venkatesan and Kumar, Vinayak M.}, title = {{Pseudobinomiality of the Sticky Random Walk}}, booktitle = {12th Innovations in Theoretical Computer Science Conference (ITCS 2021)}, pages = {48:1--48:19}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-177-1}, ISSN = {1868-8969}, year = {2021}, volume = {185}, editor = {Lee, James R.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.48}, URN = {urn:nbn:de:0030-drops-135870}, doi = {10.4230/LIPIcs.ITCS.2021.48}, annote = {Keywords: Expander Graphs, Fourier analysis, Markov Chains, Pseudorandomness, Random Walks} }


RANDOM

**Published in:** LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)

A family of error-correcting codes is list-decodable from error fraction p if, for every code in the family, the number of codewords in any Hamming ball of fractional radius p is less than some integer L that is independent of the code length. It is said to be list-recoverable for input list size 𝓁 if for every sufficiently large subset of codewords (of size L or more), there is a coordinate where the codewords take more than 𝓁 values. The parameter L is said to be the "list size" in either case. The capacity, i.e., the largest possible rate for these notions as the list size L → ∞, is known to be 1-h_q(p) for list-decoding, and 1-log_q 𝓁 for list-recovery, where q is the alphabet size of the code family.
In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show the following claims hold with high probability over the choice of the code (below q is the alphabet size, and ε > 0 is the gap to capacity).
- A random linear code of rate 1 - log_q(𝓁) - ε requires list size L ≥ 𝓁^{Ω(1/ε)} for list-recovery from input list size 𝓁. This is in surprising contrast to completely random codes, where L = O(𝓁/ε) suffices w.h.p.
- A random linear code of rate 1 - h_q(p) - ε requires list size L ≥ ⌊h_q(p)/ε + 0.99⌋ for list-decoding from error fraction p, when ε is sufficiently small.
- A random binary linear code of rate 1 - h₂(p) - ε is list-decodable from average error fraction p with list size L ≤ ⌊h₂(p)/ε⌋ + 2. (The average error version measures the average Hamming distance of the codewords from the center of the Hamming ball, instead of the maximum distance as in list-decoding.)
The second and third results together precisely pin down the list sizes for binary random linear codes for both list-decoding and average-radius list-decoding to three possible values.
Our lower bounds follow by exhibiting an explicit subset of codewords such that this subset - or some symbol-wise permutation of it - lies in a random linear code with high probability. This uses a recent characterization, due to Mosheiff, Resch, Ron-Zewi, Silas, and Wootters (2019), of the configurations of codewords that are contained in random linear codes. Our upper bound follows from a refinement of the techniques of Guruswami, Håstad, Sudan, and Zuckerman (2002) and strengthens a previous result of Li and Wootters (2018), which applied to list-decoding rather than average-radius list-decoding.

Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for List-Decoding and List-Recovery of Random Linear Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2020.9, author = {Guruswami, Venkatesan and Li, Ray and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary}, title = {{Bounds for List-Decoding and List-Recovery of Random Linear Codes}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)}, pages = {9:1--9:21}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-164-1}, ISSN = {1868-8969}, year = {2020}, volume = {176}, editor = {Byrka, Jaros{\l}aw and Meka, Raghu}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.9}, URN = {urn:nbn:de:0030-drops-126126}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2020.9}, annote = {Keywords: list-decoding, list-recovery, random linear codes, coding theory} }


APPROX

**Published in:** LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)

Dinur’s celebrated proof of the PCP theorem alternates two main steps in several iterations: gap amplification, which increases the soundness gap by a large constant factor (at the expense of a much larger alphabet size), and a composition step, which brings the alphabet size back to an absolute constant (at the expense of a fixed constant-factor loss in the soundness gap). We note that the gap amplification can produce a Label Cover CSP. This allows us to reduce the alphabet size via a direct long-code-based reduction from Label Cover to a Boolean CSP. Our composition step thus bypasses the concept of Assignment Testers from Dinur’s proof, and we believe it is more intuitive - it is just a gadget reduction. The analysis also uses only elementary facts (Parseval’s identity) about Fourier transforms over the hypercube.

Venkatesan Guruswami, Jakub Opršal, and Sai Sandeep. Revisiting Alphabet Reduction in Dinur’s PCP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2020.34, author = {Guruswami, Venkatesan and Opr\v{s}al, Jakub and Sandeep, Sai}, title = {{Revisiting Alphabet Reduction in Dinur’s PCP}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)}, pages = {34:1--34:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-164-1}, ISSN = {1868-8969}, year = {2020}, volume = {176}, editor = {Byrka, Jaros{\l}aw and Meka, Raghu}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.34}, URN = {urn:nbn:de:0030-drops-126372}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2020.34}, annote = {Keywords: PCP theorem, CSP, discrete Fourier analysis, label cover, long code} }


Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)

The d-to-1 conjecture of Khot asserts that it is NP-hard to satisfy an ε fraction of constraints of a satisfiable d-to-1 Label Cover instance, for arbitrarily small ε > 0. We prove that the d-to-1 conjecture for any fixed d implies the hardness of coloring a 3-colorable graph with C colors for arbitrarily large integers C.
Earlier, the hardness of O(1)-coloring 4-colorable graphs was known under the 2-to-1 conjecture, which is the strongest in the family of d-to-1 conjectures, and the hardness for 3-colorable graphs was known under a certain "fish-shaped" variant of the 2-to-1 conjecture.

Venkatesan Guruswami and Sai Sandeep. d-To-1 Hardness of Coloring 3-Colorable Graphs with O(1) Colors. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 62:1-62:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{guruswami_et_al:LIPIcs.ICALP.2020.62, author = {Guruswami, Venkatesan and Sandeep, Sai}, title = {{d-To-1 Hardness of Coloring 3-Colorable Graphs with O(1) Colors}}, booktitle = {47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)}, pages = {62:1--62:12}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-138-2}, ISSN = {1868-8969}, year = {2020}, volume = {168}, editor = {Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.62}, URN = {urn:nbn:de:0030-drops-124694}, doi = {10.4230/LIPIcs.ICALP.2020.62}, annote = {Keywords: graph coloring, hardness of approximation} }


**Published in:** LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)

Leakage-resilient secret sharing has mostly been studied in compartmentalized models, where a leakage oracle can arbitrarily leak a bounded number of bits from all shares, provided that the oracle only has access to a bounded number of shares when the leakage is taking place. We start a systematic study of leakage-resilient secret sharing against global leakage, where the leakage oracle can access the full set of shares simultaneously, but the access is restricted to a special class of leakage functions. More concretely, the adversary can corrupt several players and obtain their shares, as well as apply a leakage function from a specific class to the full share vector. We explicitly construct such leakage-resilient secret sharing with respect to affine leakage functions and low-degree multivariate polynomial leakage functions, respectively.
For affine leakage functions, we obtain schemes with threshold access structure that are leakage-resilient as long as there is a substantial gap between the total amount of information obtained by the adversary - through corrupting individual players and leaking from the full share vector - and the amount that the reconstruction algorithm requires for reconstructing the secret. Furthermore, if we assume the adversary is non-adaptive, we can even make the secret length asymptotically equal to this gap as the share length grows. Specifically, we obtain a threshold scheme with parameters similar to Shamir’s scheme that is leakage-resilient against affine leakage.
For multivariate polynomial leakage functions of degree bigger than one, our constructions only yield ramp schemes that are leakage-resilient against such leakage. Finally, as a result of independent interest, we show that our approach to leakage-resilient secret sharing also yields a competitive scheme compared with the state-of-the-art construction in the compartmentalized models.
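
As a reference point for the "parameters similar to Shamir's scheme" comparison, here is a minimal Shamir t-out-of-n sketch over a small prime field (the prime and parameters are illustrative). This is the classical baseline only; it is not leakage-resilient in the sense studied in the paper.

```python
import random

P = 2 ** 13 - 1                       # a small prime; real deployments use larger fields

# Split the secret into n shares; any t of them determine the degree-(t-1) polynomial.
def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

# Lagrange interpolation at 0 recovers the constant term, i.e., the secret.
def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=1234, n=5, t=3)
print(reconstruct(shares[:3]), reconstruct(shares[2:]))   # both print 1234
```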

Fuchun Lin, Mahdi Cheraghchi, Venkatesan Guruswami, Reihaneh Safavi-Naini, and Huaxiong Wang. Leakage-Resilient Secret Sharing in Non-Compartmentalized Models. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{lin_et_al:LIPIcs.ITC.2020.7, author = {Lin, Fuchun and Cheraghchi, Mahdi and Guruswami, Venkatesan and Safavi-Naini, Reihaneh and Wang, Huaxiong}, title = {{Leakage-Resilient Secret Sharing in Non-Compartmentalized Models}}, booktitle = {1st Conference on Information-Theoretic Cryptography (ITC 2020)}, pages = {7:1--7:24}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-151-1}, ISSN = {1868-8969}, year = {2020}, volume = {163}, editor = {Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.7}, URN = {urn:nbn:de:0030-drops-121124}, doi = {10.4230/LIPIcs.ITC.2020.7}, annote = {Keywords: Leakage-resilient cryptography, Secret sharing scheme, Randomness extractor} }


APPROX

**Published in:** LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)

We study the problem of approximating the value of a Unique Games instance in the streaming model. A simple count of the number of constraints divided by p, the alphabet size of the Unique Game, gives a trivial p-approximation that can be computed in O(log n) space. Meanwhile, with high probability, a sample of Õ(n) constraints suffices to estimate the optimal value to (1+ε) accuracy. We prove that any single-pass streaming algorithm that achieves a (p-ε)-approximation requires Ω_ε(√n) space. Our proof is via a reduction from lower bounds for a communication problem that is a p-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reductions to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for streaming approximability of other CSP-like problems.
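
The trivial estimator mentioned above really is a one-pass, low-space algorithm: every Unique Games constraint is satisfiable in isolation, and a uniformly random assignment satisfies each permutation constraint with probability 1/p, so m/p ≤ OPT ≤ m and outputting m/p is a p-approximation. The stream encoding below (triples (u, v, π)) is hypothetical; only the count is used.

```python
# One pass, a constant number of counters: count constraints m and output m/p.
def trivial_ug_estimate(stream, p):
    m = 0
    for _u, _v, _pi in stream:
        m += 1
    return m / p                  # m/p <= OPT <= m, hence a p-approximation

constraints = [(0, 1, (1, 0)), (1, 2, (0, 1)), (2, 0, (1, 0))]
print(trivial_ug_estimate(constraints, p=2))
```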

Venkatesan Guruswami and Runzhou Tao. Streaming Hardness of Unique Games. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 5:1-5:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2019.5, author = {Guruswami, Venkatesan and Tao, Runzhou}, title = {{Streaming Hardness of Unique Games}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)}, pages = {5:1--5:12}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-125-2}, ISSN = {1868-8969}, year = {2019}, volume = {145}, editor = {Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.5}, URN = {urn:nbn:de:0030-drops-112209}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2019.5}, annote = {Keywords: Communication complexity, CSP, Fourier Analysis, Lower bounds, Streaming algorithms, Unique Games} }

Document

APPROX

**Published in:** LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)

A k-uniform hypergraph is said to be r-rainbow colorable if there is an r-coloring of its vertices such that every hyperedge intersects all r color classes. Given such a hypergraph as input, finding an r-rainbow coloring of it is NP-hard for all k >= 3 and r >= 2. Therefore, one settles for finding a rainbow coloring with fewer colors (which is an easier task). When r = k (the maximum possible value), i.e., the hypergraph is k-partite, one can efficiently 2-rainbow color the hypergraph, i.e., 2-color its vertices so that there are no monochromatic edges. In this work we consider the next smaller value of r = k-1, and prove that in this case it is NP-hard to rainbow color the hypergraph with q := ceil[(k-2)/2] colors. In particular, for k <= 6, it is NP-hard to 2-color (k-1)-rainbow colorable k-uniform hypergraphs.
Our proof follows the algebraic approach to promise constraint satisfaction problems. It proceeds by characterizing the polymorphisms associated with the approximate rainbow coloring problem, which are rainbow colorings of some product hypergraphs on vertex set [r]^n. We prove that any such polymorphism f: [r]^n -> [q] must be C-fixing, i.e., there is a small subset S of C coordinates and a setting a in [q]^S such that fixing x_{|S} = a determines the value of f(x). The key step in our proof is bounding the sensitivity of certain rainbow colorings, thereby arguing that they must be juntas. Armed with the C-fixing characterization, our NP-hardness is obtained via a reduction from smooth Label Cover.
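
As a side illustration of the easy r = k case mentioned above: if the k-partition itself is given, a 2-rainbow coloring is immediate, since every hyperedge meets all parts. The sketch below assumes the partition is available (the abstract's claim concerns the promise problem, where obtaining such a coloring efficiently without the partition is the interesting part); all names are hypothetical.

```python
def two_color_from_partition(num_vertices, parts, edges):
    """Given a k-partition certifying k-rainbow colorability (each hyperedge
    meets every part), 2-color the vertices with no monochromatic hyperedge:
    paint one part color 0 and everything else color 1."""
    color = [1] * num_vertices
    for v in parts[0]:
        color[v] = 0
    # every hyperedge meets part 0 (color 0) and some other part (color 1)
    assert all(len({color[v] for v in e}) == 2 for e in edges)
    return color

# Toy 3-partite 3-uniform hypergraph with parts {0,1}, {2,3}, {4,5}.
parts = [[0, 1], [2, 3], [4, 5]]
edges = [(0, 2, 4), (1, 3, 5), (0, 3, 4), (1, 2, 5)]
print(two_color_from_partition(6, parts, edges))
```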

Venkatesan Guruswami and Sai Sandeep. Rainbow Coloring Hardness via Low Sensitivity Polymorphisms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 15:1-15:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2019.15, author = {Guruswami, Venkatesan and Sandeep, Sai}, title = {{Rainbow Coloring Hardness via Low Sensitivity Polymorphisms}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)}, pages = {15:1--15:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-125-2}, ISSN = {1868-8969}, year = {2019}, volume = {145}, editor = {Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.15}, URN = {urn:nbn:de:0030-drops-112303}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2019.15}, annote = {Keywords: inapproximability, hardness of approximation, constraint satisfaction, hypergraph coloring, polymorphisms} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

Local Reconstruction Codes (LRCs) allow for recovery from a small number of erasures in a local manner based on just a few other codeword symbols. They have emerged as the codes of choice for large scale distributed storage systems due to the very efficient repair of failed storage nodes in the typical scenario of a single or few nodes failing, while also offering fault tolerance against worst-case scenarios with more erasures. A maximally recoverable (MR) LRC offers the best possible blend of such local and global fault tolerance, guaranteeing recovery from all erasure patterns which are information-theoretically correctable given the presence of local recovery groups. In an (n,r,h,a)-LRC, the n codeword symbols are partitioned into disjoint local groups of size r, each of which includes a local parity checks capable of locally correcting a erasures within the group. The codeword symbols further obey h heavy (global) parity checks. Such a code is maximally recoverable if it can correct all patterns of a erasures per local group plus up to h additional erasures anywhere in the codeword. This property amounts to linear independence of all such subsets of columns of the parity check matrix.
MR LRCs have received much attention recently, with many explicit constructions covering different regimes of parameters. Unfortunately, all known constructions require a large field size that is exponential in h or a, and it is of interest to obtain MR LRCs of minimal possible field size. In this work, we develop an approach based on function fields to construct MR LRCs. Our method recovers, and in most parameter regimes improves, the field size of previous approaches. For instance, for the case of small r << epsilon log n and large h >= Omega(n^{1-epsilon}), we improve the field size from roughly n^h to n^{epsilon h}. For the case of a = 1 (one local parity check), we improve the field size quadratically from r^{h(h+1)} to r^{h floor[(h+1)/2]} for some range of r. The improvements are modest, but more importantly are obtained in a unified manner via a promising new idea.
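
The MR condition described above (linear independence of the erased columns of the parity check matrix) can be verified by brute force on toy parameters. The sketch below is such a checker over a small prime field; the example matrix is an illustrative choice, not a construction from the paper.

```python
import itertools

def rank_mod_p(M, p):
    """Rank of matrix M (list of rows) over GF(p) by Gauss-Jordan elimination."""
    M = [[x % p for x in row] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                f = M[i][col]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

def is_maximally_recoverable(H, groups, a, h, p):
    """Brute-force MR check: every pattern of `a` erasures per local group
    plus `h` more erasures anywhere must index linearly independent columns
    of the parity-check matrix H over GF(p)."""
    n = len(H[0])
    for base in itertools.product(*[itertools.combinations(g, a) for g in groups]):
        fixed = {c for grp in base for c in grp}
        for extra in itertools.combinations(sorted(set(range(n)) - fixed), h):
            S = sorted(fixed | set(extra))
            sub = [[row[c] for c in S] for row in H]  # erased columns of H
            if rank_mod_p(sub, p) < len(S):
                return False
    return True

H = [[1, 1, 0, 0],   # local parity of group {0, 1}
     [0, 0, 1, 1],   # local parity of group {2, 3}
     [1, 2, 3, 4]]   # one heavy (global) parity check
print(is_maximally_recoverable(H, groups=[[0, 1], [2, 3]], a=1, h=1, p=5))  # True
```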

Venkatesan Guruswami, Lingfei Jin, and Chaoping Xing. Constructions of Maximally Recoverable Local Reconstruction Codes via Function Fields. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{guruswami_et_al:LIPIcs.ICALP.2019.68, author = {Guruswami, Venkatesan and Jin, Lingfei and Xing, Chaoping}, title = {{Constructions of Maximally Recoverable Local Reconstruction Codes via Function Fields}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {68:1--68:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.68}, URN = {urn:nbn:de:0030-drops-106449}, doi = {10.4230/LIPIcs.ICALP.2019.68}, annote = {Keywords: Erasure codes, Algebraic constructions, Linear algebra, Locally Repairable Codes, Explicit constructions} }

Document

Track A: Algorithms, Complexity and Games

**Published in:** LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)

We say a subset C subseteq {1,2,...,k}^n is a k-hash code (also called k-separated) if for every subset of k codewords from C, there exists a coordinate where all these codewords have distinct values. Understanding the largest possible rate (in bits), defined as (log_2 |C|)/n, of a k-hash code is a classical problem. It arises in two equivalent contexts: (i) the smallest size possible for a perfect hash family that maps a universe of N elements into {1,2,...,k}, and (ii) the zero-error capacity for decoding with lists of size less than k for a certain combinatorial channel.
A general upper bound of k!/k^{k-1} on the rate of a k-hash code (in the limit of large n) was obtained by Fredman and Komlós in 1984 for any k >= 4. While better bounds have been obtained for k=4, their original bound has remained the best known for each k >= 5. In this work, we present a method to obtain the first improvement to the Fredman-Komlós bound for every k >= 5, and we apply this method to give explicit numerical bounds for k=5, 6.
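
The k-separated property itself is easy to test exhaustively on small examples, which may help unpack the definition; the toy code below is an arbitrary choice, not related to the bounds above.

```python
import itertools
from math import log2

def is_k_hash_code(code, k):
    """Check the k-separated property: for every k codewords there must be a
    coordinate on which all k of them take distinct values."""
    n = len(code[0])
    return all(
        any(len({w[i] for w in group}) == k for i in range(n))
        for group in itertools.combinations(code, k)
    )

code = ["012", "120", "201", "111"]      # n = 3 over alphabet {0, 1, 2}
print(is_k_hash_code(code, k=3))         # True
print(log2(len(code)) / len(code[0]))    # rate (log_2 |C|)/n = 2/3
```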

Venkatesan Guruswami and Andrii Riazanov. Beating Fredman-Komlós for Perfect k-Hashing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 92:1-92:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{guruswami_et_al:LIPIcs.ICALP.2019.92, author = {Guruswami, Venkatesan and Riazanov, Andrii}, title = {{Beating Fredman-Koml\'{o}s for Perfect k-Hashing}}, booktitle = {46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)}, pages = {92:1--92:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-109-2}, ISSN = {1868-8969}, year = {2019}, volume = {132}, editor = {Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.92}, URN = {urn:nbn:de:0030-drops-106687}, doi = {10.4230/LIPIcs.ICALP.2019.92}, annote = {Keywords: Coding theory, perfect hashing, hash family, graph entropy, zero-error information theory} }

Document

**Published in:** LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)

Using a mild variant of polar codes, we design linear compression schemes that compress Hidden Markov sources (where the source is a Markov chain, but whose state is not necessarily observable from its output) and that decode from Hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial both in the gap to capacity and the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and polynomial bounds were not available with respect to either the gap to capacity or mixing time. Our results operate in the setting where the source (or the channel) is known. If the source is unknown, then compression at such short lengths would lead to effective algorithms for learning parity with noise; thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown.
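
For intuition on the source model (not on the compression scheme itself), the following sketch samples from a two-state hidden Markov source: the observer sees only the emitted bits, never the underlying state. Parameters are arbitrary toy choices.

```python
import random

def hidden_markov_bits(T, emit, N, seed=0):
    """Sample N bits from a hidden Markov source: the state evolves as a
    Markov chain with transition matrix T, and each step emits a coin flip
    with bias emit[state]; only the bits are observed, never the state."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(N):
        out.append(int(rng.random() < emit[state]))
        state = rng.choices(range(len(T)), weights=T[state])[0]
    return out

T = [[0.9, 0.1],     # slowly mixing two-state chain
     [0.1, 0.9]]
print("".join(map(str, hidden_markov_bits(T, emit=[0.1, 0.9], N=40))))
```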

Venkatesan Guruswami, Preetum Nakkiran, and Madhu Sudan. Algorithmic Polarization for Hidden Markov Models. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2019.39, author = {Guruswami, Venkatesan and Nakkiran, Preetum and Sudan, Madhu}, title = {{Algorithmic Polarization for Hidden Markov Models}}, booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)}, pages = {39:1--39:19}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-095-8}, ISSN = {1868-8969}, year = {2019}, volume = {124}, editor = {Blum, Avrim}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.39}, URN = {urn:nbn:de:0030-drops-101326}, doi = {10.4230/LIPIcs.ITCS.2019.39}, annote = {Keywords: polar codes, error-correcting codes, compression, hidden markov model} }

Document

**Published in:** LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)

Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length l among any N <= 2^l players such that for a threshold parameter t, (i) the knowledge of any t shares does not reveal any information about the secret and (ii) any choice of t+1 shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length l, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of the secret from any t+g shares, for a positive integer gap parameter g. A ramp secret sharing scheme necessarily requires shares of length l/g. Other than this bound related to the secret length l, the share lengths of ramp schemes cannot go below a quantity that depends only on the gap ratio g/N.
In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio g/N, where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, together with negligible reconstruction error probability, eliminates the impossibility. Moreover, we provide explicit constructions of such schemes. One consequence of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries require different analyses and constructions. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any tau fraction of observed shares, and reconstruction from any rho fraction of shares, for any choices of 0 <= tau < rho <= 1. Our construction achieves secret length N(rho-tau-o(1)), which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining a secret length Omega(N(rho-tau)). We discuss our results and open questions.
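
For reference, here is a textbook sketch of Shamir's threshold scheme as recalled above, over a prime field; this is the classical baseline, not the binary-share construction of this paper.

```python
import random

P = 2**61 - 1  # prime modulus; the field size must exceed the number of players

def shamir_share(secret, t, n, seed=0):
    """Shamir's threshold scheme: pick a random degree-t polynomial f over
    GF(P) with f(0) = secret; share i is (i, f(i)). Any t shares are
    independent of the secret; any t+1 recover it by interpolation."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation of f at x = 0 over GF(P)."""
    secret = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * -x_j % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

shares = shamir_share(secret=123456789, t=2, n=5)
print(shamir_reconstruct(shares[:3]))  # any t+1 = 3 shares suffice: 123456789
```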

Fuchun Lin, Mahdi Cheraghchi, Venkatesan Guruswami, Reihaneh Safavi-Naini, and Huaxiong Wang. Secret Sharing with Binary Shares. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)

@InProceedings{lin_et_al:LIPIcs.ITCS.2019.53, author = {Lin, Fuchun and Cheraghchi, Mahdi and Guruswami, Venkatesan and Safavi-Naini, Reihaneh and Wang, Huaxiong}, title = {{Secret Sharing with Binary Shares}}, booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)}, pages = {53:1--53:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-095-8}, ISSN = {1868-8969}, year = {2019}, volume = {124}, editor = {Blum, Avrim}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.53}, URN = {urn:nbn:de:0030-drops-101461}, doi = {10.4230/LIPIcs.ITCS.2019.53}, annote = {Keywords: Secret sharing scheme, Wiretap channel} }

Document

**Published in:** Dagstuhl Reports, Volume 8, Issue 6 (2019)

Constraint satisfaction has always played a central role in computational complexity theory; appropriate versions of CSPs are classical complete problems for most standard complexity classes. CSPs constitute a very rich and yet sufficiently manageable class of problems to give a good perspective on general computational phenomena. For instance, they help to understand which mathematical properties make a computational problem tractable (in a wide sense, e.g., polynomial-time solvable, non-trivially approximable, fixed-parameter tractable, or definable in a weak logic). In the last decade, research activity in this area has significantly intensified and hugely impressive progress was made.

The Dagstuhl Seminar 18231 "The Constraint Satisfaction Problem: Complexity and Approximability" was aimed at bringing together researchers using all the different techniques in the study of the CSP so that they can share their insights obtained during the past three years. This report documents the material presented during the course of the seminar.

Martin Grohe, Venkatesan Guruswami, and Stanislav Zivny. The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 18231). In Dagstuhl Reports, Volume 8, Issue 6, pp. 1-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@Article{grohe_et_al:DagRep.8.6.1, author = {Grohe, Martin and Guruswami, Venkatesan and Zivny, Stanislav}, title = {{The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 18231)}}, pages = {1--18}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2018}, volume = {8}, number = {6}, editor = {Grohe, Martin and Guruswami, Venkatesan and Zivny, Stanislav}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.6.1}, URN = {urn:nbn:de:0030-drops-100251}, doi = {10.4230/DagRep.8.6.1}, annote = {Keywords: Constraint satisfaction problem (CSP); Computational complexity; CSP dichotomy conjecture; Hardness of approximation; Unique games conjecture; Parameterised complexity; Descriptive complexity; Universal algebra; Logic; Semidefinite programming} }

Document

**Published in:** LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)

We show that the entire class of polar codes (up to a natural necessary condition) converges to capacity at block lengths polynomial in the gap to capacity, while simultaneously achieving failure probabilities that are exponentially small in the block length (i.e., decoding fails with probability exp(-N^{Omega(1)}) for codes of length N). Previously this combination was known only for one specific family within the class of polar codes, whereas we establish it whenever the polar code exhibits a condition necessary for any polarization.
Our results adapt and strengthen a local analysis of polar codes due to the authors with Nakkiran and Rudra [Proc. STOC 2018]. Their analysis related the time-local behavior of a martingale to its global convergence, and this allowed them to prove that the broad class of polar codes converges to capacity at polynomial block lengths. Their analysis easily adapts to show exponentially small failure probabilities, provided the associated martingale, the "Arikan martingale", exhibits a corresponding strong local effect. The main contribution of this work is a much stronger local analysis of the Arikan martingale, which leads to the general result claimed above.
In addition to our general result, we also show, for the first time, polar codes that achieve failure probability exp(-N^{beta}) for any beta < 1 while converging to capacity at block length polynomial in the gap to capacity. Finally, we show that the "local" approach can be combined with any analysis of the failure probability of an arbitrary polar code to get essentially the same failure probability while achieving block length polynomial in the gap to capacity.

Jaroslaw Blasiok, Venkatesan Guruswami, and Madhu Sudan. Polar Codes with Exponentially Small Error at Finite Block Length. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@InProceedings{blasiok_et_al:LIPIcs.APPROX-RANDOM.2018.34, author = {Blasiok, Jaroslaw and Guruswami, Venkatesan and Sudan, Madhu}, title = {{Polar Codes with Exponentially Small Error at Finite Block Length}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)}, pages = {34:1--34:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-085-9}, ISSN = {1868-8969}, year = {2018}, volume = {116}, editor = {Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.34}, URN = {urn:nbn:de:0030-drops-94382}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2018.34}, annote = {Keywords: Polar codes, error exponent, rate of polarization} }

Document

**Published in:** LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)

A locally repairable code (LRC) with locality r allows for the recovery of any erased codeword symbol using only r other codeword symbols. A Singleton-type bound dictates the best possible trade-off between the dimension and distance of LRCs - an LRC attaining this trade-off is deemed optimal. Such optimal LRCs have been constructed over alphabets growing linearly in the block length. Unlike the classical Singleton bound, however, it was not known if such a linear growth in the alphabet size is necessary, or for that matter even if the alphabet needs to grow at all with the block length. Indeed, for small code distances 3,4, arbitrarily long optimal LRCs were known over fixed alphabets.
Here, we prove that for distances d >= 5, the code length n of an optimal LRC over an alphabet of size q must be at most roughly O(d q^3). For the case d = 5, our upper bound is O(q^2). We complement these bounds by showing the existence of optimal LRCs of length Omega_{d,r}(q^{1+1/floor[(d-3)/2]}) when d <= r+2. Our bounds match when d = 5, pinning down n = Theta(q^2) as the asymptotically largest length of an optimal LRC for this case.

Venkatesan Guruswami, Chaoping Xing, and Chen Yuan. How Long Can Optimal Locally Repairable Codes Be?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 41:1-41:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2018.41, author = {Guruswami, Venkatesan and Xing, Chaoping and Yuan, Chen}, title = {{How Long Can Optimal Locally Repairable Codes Be?}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)}, pages = {41:1--41:11}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-085-9}, ISSN = {1868-8969}, year = {2018}, volume = {116}, editor = {Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.41}, URN = {urn:nbn:de:0030-drops-94458}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2018.41}, annote = {Keywords: Locally Repairable Code, Singleton Bound} }

Document

**Published in:** LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)

For a vector space F^n over a field F, an (eta,beta)-dimension expander of degree d is a collection of d linear maps Gamma_j : F^n -> F^n such that for every subspace U of F^n of dimension at most eta n, the image of U under all the maps, sum_{j=1}^d Gamma_j(U), has dimension at least beta dim(U). Over a finite field, a random collection of d = O(1) maps Gamma_j offers excellent "lossless" expansion with high probability: beta ≈ d for eta >= Omega(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even a modest expansion factor beta = 1+epsilon with constant degree is a non-trivial goal.
We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank-metric. Our approach yields the following:
- Lossless expansion over large fields; more precisely beta >= (1-epsilon)d and eta >= (1-epsilon)/d with d = O_epsilon(1), when |F| >= Omega(n).
- Optimal up to constant factors expansion over fields of arbitrarily small polynomial size; more precisely beta >= Omega(delta d) and eta >= Omega(1/(delta d)) with d = O_delta(1), when |F| >= n^{delta}.

Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Omega(1), 1+Omega(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with beta ≳ sqrt(d) over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
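
To see the "lossless" behavior of random maps numerically, one can compute dim(sum_j Gamma_j(U)) for random integer maps. The sketch below works over the rationals for simplicity (random matrices behave generically there), whereas the constructions above are over finite fields; all names are illustrative.

```python
import numpy as np

def image_dimension(maps, U):
    """dim(sum_j Gamma_j(U)) for a subspace U given by basis columns;
    the sum of the image subspaces is spanned by the stacked images."""
    return np.linalg.matrix_rank(np.hstack([G @ U for G in maps]))

rng = np.random.default_rng(1)
n, d, dim_U = 12, 3, 2
maps = [rng.integers(-5, 6, size=(n, n)) for _ in range(d)]
U = rng.integers(-5, 6, size=(n, dim_U))  # basis of a random 2-dim subspace
print(image_dimension(maps, U))           # typically d * dim(U) = 6 ("lossless")
```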

Venkatesan Guruswami, Nicolas Resch, and Chaoping Xing. Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@InProceedings{guruswami_et_al:LIPIcs.CCC.2018.4, author = {Guruswami, Venkatesan and Resch, Nicolas and Xing, Chaoping}, title = {{Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs}}, booktitle = {33rd Computational Complexity Conference (CCC 2018)}, pages = {4:1--4:16}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-069-9}, ISSN = {1868-8969}, year = {2018}, volume = {102}, editor = {Servedio, Rocco A.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.4}, URN = {urn:nbn:de:0030-drops-88859}, doi = {10.4230/LIPIcs.CCC.2018.4}, annote = {Keywords: Algebraic constructions, coding theory, linear algebra, list-decoding, polynomial method, pseudorandomness} }

Document

**Published in:** LIPIcs, Volume 93, 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)

A hypergraph is k-rainbow colorable if there exists a vertex coloring using k colors such that each hyperedge contains all k colors. Unlike usual hypergraph coloring, rainbow coloring becomes harder as the number of colors increases. This work studies the rainbow colorability of hypergraphs which are guaranteed to be nearly balanced rainbow colorable. Specifically, we show that for any Q, k >= 2 and l <= k/2, given a Qk-uniform hypergraph which admits a k-rainbow coloring satisfying:
- in each hyperedge e, for some l_e <= l, all but 2*l_e colors occur exactly Q times and the rest occur (Q +/- 1) times,
it is NP-hard to compute an independent set of a (1 - (l+1)/k + epsilon)-fraction of vertices, for any constant epsilon > 0. In particular, this implies the hardness of even (k/l)-rainbow coloring such hypergraphs.
The result is based on a novel long code PCP test that ensures the strong balancedness property desired of the k-rainbow coloring in the completeness case. The soundness analysis relies on a mixing bound based on uniform reverse hypercontractivity due to Mossel, Oleszkiewicz, and Sen, which was also used in earlier proofs of the hardness of omega(1)-coloring 2-colorable 4-uniform hypergraphs due to Saket, and of k-rainbow colorable 2k-uniform hypergraphs due to Guruswami and Lee.

Venkatesan Guruswami and Rishi Saket. Hardness of Rainbow Coloring Hypergraphs. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)

@InProceedings{guruswami_et_al:LIPIcs.FSTTCS.2017.33, author = {Guruswami, Venkatesan and Saket, Rishi}, title = {{Hardness of Rainbow Coloring Hypergraphs}}, booktitle = {37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)}, pages = {33:1--33:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-055-2}, ISSN = {1868-8969}, year = {2018}, volume = {93}, editor = {Lokam, Satya and Ramanujam, R.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.33}, URN = {urn:nbn:de:0030-drops-83905}, doi = {10.4230/LIPIcs.FSTTCS.2017.33}, annote = {Keywords: Fourier analysis of Boolean functions, hypergraph coloring, Inapproximability, Label Cover, PCP} }

Document

**Published in:** LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

The Unique Games Conjecture (UGC) has pinned down the approximability of all constraint satisfaction problems (CSPs), showing that a natural semidefinite programming relaxation offers the optimal worst-case approximation ratio for any CSP. This elegant picture, however, does not apply for CSP instances that are perfectly satisfiable, due to the imperfect completeness inherent in the UGC. For the important case when the input CSP instance admits a satisfying assignment, it therefore remains wide open to understand how well it can be approximated.
This work is motivated by the pursuit of a better understanding of the inapproximability of perfectly satisfiable instances of CSPs. Our main conceptual contribution is the formulation of a (hypergraph) version of Label Cover which we call "V label cover." Assuming a conjecture concerning the inapproximability of V label cover on perfectly satisfiable instances, we prove the following implications:
* There is an absolute constant c0 such that for k >= 3, given a satisfiable instance of Boolean k-CSP, it is hard to find an assignment satisfying more than a c0 k^2/2^k fraction of the constraints.
* Given a k-uniform hypergraph, k >= 2, for all epsilon > 0, it is hard to tell if it is q-strongly colorable or has no independent set with an epsilon fraction of vertices, where q = ceil[k + sqrt(k) - 0.5].
* Given a k-uniform hypergraph, k >= 3, for all epsilon > 0, it is hard to tell if it is (k-1)-rainbow colorable or has no independent set with an epsilon fraction of vertices.
We further supplement the above results with a proof that an "almost Unique" version of Label Cover can be approximated within a constant factor on satisfiable instances.

Joshua Brakensiek and Venkatesan Guruswami. The Quest for Strong Inapproximability Results with Perfect Completeness. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{brakensiek_et_al:LIPIcs.APPROX-RANDOM.2017.4, author = {Brakensiek, Joshua and Guruswami, Venkatesan}, title = {{The Quest for Strong Inapproximability Results with Perfect Completeness}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, pages = {4:1--4:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-044-6}, ISSN = {1868-8969}, year = {2017}, volume = {81}, editor = {Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.4}, URN = {urn:nbn:de:0030-drops-75537}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.4}, annote = {Keywords: inapproximability, hardness of approximation, dictatorship testing, constraint satisfaction, hypergraph coloring} }

Document

**Published in:** LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

We study the complexity of estimating the optimum value of a Boolean 2CSP (arity two constraint satisfaction problem) in the single-pass streaming setting, where the algorithm is presented the constraints in an arbitrary order. We give a streaming algorithm to estimate the optimum within a factor approaching 2/5 using logarithmic space, with high probability. This beats the trivial factor 1/4 estimate obtained by simply outputting 1/4-th of the total number of constraints.
The inspiration for our work is a lower bound of Kapralov, Khanna, and Sudan (SODA'15), who showed that a similar trivial estimate (of factor 1/2) is the best one can do for Max CUT. Their lower bound implies that beating a factor 1/2 for Max DICUT (a special case of Max 2CSP) requires polynomial space; in particular, so does distinguishing between the case when the optimum is m/2 and the case when it is at most (1/4+epsilon)m, where m is the total number of edges. We complement this hardness result by showing that for DICUT, one can distinguish between the case in which the optimum exceeds (1/2+epsilon)m and the case in which it is close to m/4.
We also prove that estimating the size of the maximum acyclic subgraph of a directed graph, when its edges are presented in a single-pass stream, within a factor better than 7/8 requires polynomial space.
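
The trivial baseline mentioned in the first paragraph is a one-counter algorithm; a minimal sketch follows. (The factor-2/5 algorithm of this paper is substantially more involved and is not reproduced here.)

```python
def trivial_2csp_estimate(constraint_stream):
    """O(log n)-space baseline from the abstract: count the constraints m
    and output m/4. For Max DICUT, a uniformly random assignment satisfies
    each directed edge (u, v) -- u on the '1' side, v on the '0' side --
    with probability 1/4, so m/4 <= OPT <= m."""
    m = sum(1 for _ in constraint_stream)
    return m / 4

print(trivial_2csp_estimate([(0, 1), (1, 2), (2, 0), (0, 2)]))  # 1.0
```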

Venkatesan Guruswami, Ameya Velingker, and Santhoshini Velusamy. Streaming Complexity of Approximating Max 2CSP and Max Acyclic Subgraph. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 8:1-8:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2017.8, author = {Guruswami, Venkatesan and Velingker, Ameya and Velusamy, Santhoshini}, title = {{Streaming Complexity of Approximating Max 2CSP and Max Acyclic Subgraph}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, pages = {8:1--8:19}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-044-6}, ISSN = {1868-8969}, year = {2017}, volume = {81}, editor = {Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.8}, URN = {urn:nbn:de:0030-drops-75570}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.8}, annote = {Keywords: approximation algorithms, constraint satisfaction problems, optimization, hardness of approximation, maximum acyclic subgraph} }

Document

**Published in:** LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

For an n-variate order-d tensor A, define A_{max} := sup_{||x||_2 = 1} <A, x^(otimes d)> to be the maximum value taken by the tensor on the unit sphere. It is known that for a random tensor with i.i.d. +1/-1 entries, A_{max} <= sqrt(n*d*log(d)) w.h.p. We study the problem of efficiently certifying upper bounds on A_{max} via the natural relaxation from the Sum of Squares (SoS) hierarchy. Our results include:
* When A is a random order-q tensor, we prove that q levels of SoS certifies an upper bound B on A_{max} that satisfies B <= A_{max} * (n/q^(1-o(1)))^(q/4-1/2) w.h.p. Our upper bound improves a result of Montanari and Richard (NIPS 2014) when q is large.
* We show the above bound is the best possible up to lower order terms, namely the optimum of the level-q SoS relaxation is at least A_{max} * (n/q^(1+o(1)))^(q/4-1/2).
* When A is a random order-d tensor, we prove that q levels of SoS certifies an upper bound B on A_{max} that satisfies B <= A_{max} * (n*polylog(n)/q)^(d/4 - 1/2) w.h.p. For growing q, this improves upon the bound certified by constant levels of SoS. This answers, in part, a question posed by Hopkins, Shi, and Steurer (COLT 2015), who tightly characterized constant levels of SoS.
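
As a complement to the certification question, one can cheaply compute heuristic lower bounds on A_{max} for small random tensors, which makes the quantity concrete; the random-restart ascent below is a generic sketch (for order 3), not a method from the paper.

```python
import numpy as np

def tensor_max_lower_bound(A, restarts=100, steps=25, seed=0):
    """Heuristic lower bound on A_{max} = sup_{||x||_2=1} <A, x^(otimes 3)>
    for an order-3 tensor, via random restarts of a power-iteration-like
    ascent. SoS certifies upper bounds; this probes the other side."""
    rng = np.random.default_rng(seed)
    n, best = A.shape[0], -np.inf
    for _ in range(restarts):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        for _ in range(steps):
            g = np.einsum('ijk,j,k->i', A, x, x)  # ascent direction for <A, x^3>
            nrm = np.linalg.norm(g)
            if nrm == 0:
                break
            x = g / nrm
        best = max(best, float(np.einsum('ijk,i,j,k->', A, x, x, x)))
    return best

n = 8
A = np.random.default_rng(2).choice([-1.0, 1.0], size=(n, n, n))  # i.i.d. +1/-1
print(tensor_max_lower_bound(A))
print((n * 3 * np.log(3)) ** 0.5)  # the sqrt(n*d*log(d)) scale from the abstract
```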

Vijay Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee. Sum-of-Squares Certificates for Maxima of Random Tensors on the Sphere. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{bhattiprolu_et_al:LIPIcs.APPROX-RANDOM.2017.31, author = {Bhattiprolu, Vijay and Guruswami, Venkatesan and Lee, Euiwoong}, title = {{Sum-of-Squares Certificates for Maxima of Random Tensors on the Sphere}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, pages = {31:1--31:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-044-6}, ISSN = {1868-8969}, year = {2017}, volume = {81}, editor = {Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.31}, URN = {urn:nbn:de:0030-drops-75808}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.31}, annote = {Keywords: Sum-of-Squares, Optimization over Sphere, Random Polynomials} }

Document

**Published in:** LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

In error-correcting codes, locality refers to several different ways of quantifying how easily a small amount of information can be recovered from encoded data. In this work, we study a notion of locality called the s-Disjoint-Repair-Group Property (s-DRGP). This notion can interpolate between two very different settings in coding theory: that of Locally Correctable Codes (LCCs) when s is large - a very strong guarantee - and Locally Recoverable Codes (LRCs) when s is small - a relatively weaker guarantee. This motivates the study of the s-DRGP for intermediate s, which is the focus of our paper. We construct codes in this parameter regime which have a higher rate than previously known codes. Our construction is based on a novel variant of the lifted codes of Guo, Kopparty and Sudan. Beyond the results on the s-DRGP, we hope that our construction is of independent interest, and will find uses elsewhere.

S. Luna Frank-Fischer, Venkatesan Guruswami, and Mary Wootters. Locality via Partially Lifted Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 43:1-43:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{frankfischer_et_al:LIPIcs.APPROX-RANDOM.2017.43, author = {Frank-Fischer, S. Luna and Guruswami, Venkatesan and Wootters, Mary}, title = {{Locality via Partially Lifted Codes}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, pages = {43:1--43:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-044-6}, ISSN = {1868-8969}, year = {2017}, volume = {81}, editor = {Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.43}, URN = {urn:nbn:de:0030-drops-75922}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.43}, annote = {Keywords: Error correcting codes, locality, lifted codes} }

Document

**Published in:** LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)

In the random deletion channel, each bit is deleted independently with probability p. For this channel, the existence of codes of rate (1-p)/9, and hence bounded away from 0 for every p < 1, has been known. We give an explicit construction, with polynomial time encoding and deletion correction algorithms, of rate c_0 (1-p) for an absolute constant c_0 > 0.
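
A minimal simulation of the channel model, to fix ideas (the codes and decoding algorithms of the paper are of course the nontrivial part):

```python
import random

def deletion_channel(bits, p, seed=0):
    """Random deletion channel: each bit is deleted independently with
    probability p; the receiver sees only the surviving bits, concatenated,
    with no indication of where the deletions occurred."""
    rng = random.Random(seed)
    return [b for b in bits if rng.random() >= p]

sent = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(deletion_channel(sent, p=0.3))
```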

Venkatesan Guruswami and Ray Li. Efficiently Decodable Codes for the Binary Deletion Channel. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 47:1-47:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2017.47, author = {Guruswami, Venkatesan and Li, Ray}, title = {{Efficiently Decodable Codes for the Binary Deletion Channel}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, pages = {47:1--47:13}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-044-6}, ISSN = {1868-8969}, year = {2017}, volume = {81}, editor = {Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.47}, URN = {urn:nbn:de:0030-drops-75964}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.47}, annote = {Keywords: Coding theory, Combinatorics, Synchronization errors, Channel capacity} }

Document

**Published in:** LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)

A subspace design is a (large) collection of high-dimensional subspaces {H_i} of F_q^m such that for any low-dimensional subspace W, only a small number of subspaces from the collection have non-trivial intersection with W; more precisely, the sum of the dimensions of the intersections W cap H_i is at most some parameter L. The notion was put forth by Guruswami and Xing (STOC'13) with applications to list decoding variants of Reed-Solomon and algebraic-geometric codes, and was later also used for explicit rank-metric codes with optimal list decoding radius.
Guruswami and Kopparty (FOCS'13, Combinatorica'16) gave an explicit construction of subspace designs with near-optimal parameters. This construction was based on polynomials and has close connections to folded Reed-Solomon codes, and required large field size (specifically q >= m). Forbes and Guruswami (RANDOM'15) used this construction to give explicit constant degree "dimension expanders" over large fields, and noted that subspace designs are a powerful tool in linear-algebraic pseudorandomness.
Here, we construct subspace designs over any field, at the expense of a modest worsening of the bound L on total intersection dimension. Our approach is based on a (non-trivial) extension of the polynomial-based construction to algebraic function fields, instantiated with cyclotomic function fields. Plugging our new subspace designs into the construction of Forbes and Guruswami yields dimension expanders over F^n for any field F, with logarithmic degree and expansion guarantee for subspaces of dimension Omega(n/(log(log(n)))).

Venkatesan Guruswami, Chaoping Xing, and Chen Yuan. Subspace Designs Based on Algebraic Function Fields. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 86:1-86:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)

@InProceedings{guruswami_et_al:LIPIcs.ICALP.2017.86, author = {Guruswami, Venkatesan and Xing, Chaoping and Yuan, Chen}, title = {{Subspace Designs Based on Algebraic Function Fields}}, booktitle = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)}, pages = {86:1--86:10}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-041-5}, ISSN = {1868-8969}, year = {2017}, volume = {80}, editor = {Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.86}, URN = {urn:nbn:de:0030-drops-73712}, doi = {10.4230/LIPIcs.ICALP.2017.86}, annote = {Keywords: Subspace Design, Dimension Expander, List Decoding} }

Document

**Published in:** LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)

Suppose Alice holds a uniformly random string X in {0,1}^N and Bob holds a noisy version Y of X where each bit of X is flipped independently with probability epsilon in [0,1/2]. Alice and Bob would like to extract a common random string of min-entropy at least k. In this work, we establish the communication versus success probability trade-off for this problem by giving a protocol and a matching lower bound (under the restriction that the string to be agreed upon is determined by Alice's input X). Specifically, we prove that in order for Alice and Bob to agree on a common string with probability 2^{-gamma k} (gamma k >= 1), the optimal communication (up to o(k) terms, and achievable for large N) is precisely (C*(1-gamma) - 2*sqrt(C*(1-C)*gamma)) * k, where C := 4*epsilon*(1-epsilon). In particular, the optimal communication to achieve Omega(1) agreement probability approaches 4*epsilon*(1-epsilon)*k.
We also consider the case when Y is the output of the binary erasure channel on X, where each bit of Y equals the corresponding bit of X with probability 1-epsilon and is otherwise erased (that is, replaced by a "?"). In this case, the communication required becomes (epsilon*(1-gamma) - 2*sqrt(epsilon*(1-epsilon)*gamma)) * k. In particular, the optimal communication to achieve Omega(1) agreement probability approaches epsilon*k, and with no communication the optimal agreement probability approaches 2^{-k*(1-sqrt(1-epsilon))/(1+sqrt(1-epsilon))}.
Our protocols are based on covering codes and extend the approach of Bogdanov and Mossel (2011) for the zero-communication case. Our lower bounds rely on hypercontractive inequalities. For the model of bit-flips, our argument extends the approach of Bogdanov and Mossel (2011) by allowing communication; for the erasure model, to the best of our knowledge the needed hypercontractivity statement had not been studied before, and it was established (prompted by our application) by Nair and Wang (2015). We also obtain information complexity lower bounds for these tasks, and together with our protocol, they shed light on the recently popular "most informative Boolean function" conjecture of Courtade and Kumar.
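
The two trade-off formulas above are easy to evaluate numerically; the snippet below simply tabulates them and checks the stated gamma -> 0 limits.

```python
from math import sqrt

def bsc_communication(eps, gamma):
    """Optimal communication per bit of min-entropy k in the bit-flip model:
    C*(1-gamma) - 2*sqrt(C*(1-C)*gamma), where C = 4*eps*(1-eps)."""
    C = 4 * eps * (1 - eps)
    return C * (1 - gamma) - 2 * sqrt(C * (1 - C) * gamma)

def erasure_communication(eps, gamma):
    """Erasure-model analogue: eps*(1-gamma) - 2*sqrt(eps*(1-eps)*gamma)."""
    return eps * (1 - gamma) - 2 * sqrt(eps * (1 - eps) * gamma)

# As gamma -> 0 (constant agreement probability), the per-entropy-bit
# communication approaches 4*eps*(1-eps) and eps respectively, as stated above.
print(bsc_communication(0.1, 1e-6))      # ~0.36 = 4 * 0.1 * 0.9
print(erasure_communication(0.1, 1e-6))  # ~0.1
```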

Venkatesan Guruswami and Jaikumar Radhakrishnan. Tight Bounds for Communication-Assisted Agreement Distillation. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)

@InProceedings{guruswami_et_al:LIPIcs.CCC.2016.6, author = {Guruswami, Venkatesan and Radhakrishnan, Jaikumar}, title = {{Tight Bounds for Communication-Assisted Agreement Distillation}}, booktitle = {31st Conference on Computational Complexity (CCC 2016)}, pages = {6:1--6:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-008-8}, ISSN = {1868-8969}, year = {2016}, volume = {50}, editor = {Raz, Ran}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.6}, URN = {urn:nbn:de:0030-drops-58450}, doi = {10.4230/LIPIcs.CCC.2016.6}, annote = {Keywords: communication complexity, covering codes, hypercontractivity, information theory, lower bounds, pseudorandomness} }

Document

**Published in:** LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)

Finding a proper coloring of a t-colorable graph G with t colors is a classic NP-hard problem when t >= 3. In this work, we investigate the approximate coloring problem in which the objective is to find a proper c-coloring of G where c >= t. We show that for all t >= 3, it is NP-hard to find a c-coloring when c <= 2t-2. In the regime where t is small, this improves, via a unified approach, the previously best known hardness result of c <= max{2t - 5, t + 2*floor(t/3) - 1} (Garey and Johnson 1976; Khanna, Linial, Safra, 1993; Guruswami, Khanna, 2000). For example, we show that 6-coloring a 4-colorable graph is NP-hard, improving on the NP-hardness of 5-coloring a 4-colorable graph.
We also generalize this to related problems on the strong coloring of hypergraphs. A k-uniform hypergraph H is t-strongly colorable (where t >= k) if there is a t-coloring of the vertices such that no two vertices in any hyperedge of H have the same color. We show that if t = ceil(3k/2), then it is NP-hard to find a 2-coloring of the vertices of H such that no hyperedge is monochromatic. We conjecture that a similar hardness holds for t = k+1.
We establish the NP-hardness of these problems by reducing from the hardness of the Label Cover problem, via a "dictatorship test" gadget graph. By combinatorially classifying all possible colorings of this graph, we can infer labels to provide to the label cover problem. This approach generalizes the "weak polymorphism" framework of (Austrin, Guruswami, Hastad, 2014), though interestingly our results are "PCP-free" in that they do not require any approximation gap in the starting Label Cover instance.

Joshua Brakensiek and Venkatesan Guruswami. New Hardness Results for Graph and Hypergraph Colorings. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 14:1-14:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)

@InProceedings{brakensiek_et_al:LIPIcs.CCC.2016.14, author = {Brakensiek, Joshua and Guruswami, Venkatesan}, title = {{New Hardness Results for Graph and Hypergraph Colorings}}, booktitle = {31st Conference on Computational Complexity (CCC 2016)}, pages = {14:1--14:27}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-008-8}, ISSN = {1868-8969}, year = {2016}, volume = {50}, editor = {Raz, Ran}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.14}, URN = {urn:nbn:de:0030-drops-58291}, doi = {10.4230/LIPIcs.CCC.2016.14}, annote = {Keywords: hardness of approximation, graph coloring, hypergraph coloring, polymorphisms, combinatorics} }

Document

**Published in:** Dagstuhl Reports, Volume 5, Issue 7 (2016)

During the past two decades, an impressive array of diverse methods from several different mathematical fields, including algebra, logic, mathematical programming, probability theory, graph theory, and combinatorics, have been used to analyze both the computational complexity and approximability of algorithmic tasks related to the constraint satisfaction problem (CSP), as well as the applicability and limitations of algorithmic techniques. This research direction develops at an impressive speed, regularly producing very strong and general results. The Dagstuhl Seminar 15301 "The Constraint Satisfaction Problem: Complexity and Approximability" was aimed at bringing together researchers using all the different techniques in the study of the CSP, so that they can share their insights obtained during the past three years. This report documents the material presented during the course of the seminar.

Andrei A. Bulatov, Venkatesan Guruswami, Andrei Krokhin, and Dániel Marx. The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 15301). In Dagstuhl Reports, Volume 5, Issue 7, pp. 22-41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)

@Article{bulatov_et_al:DagRep.5.7.22, author = {Bulatov, Andrei A. and Guruswami, Venkatesan and Krokhin, Andrei and Marx, D\'{a}niel}, title = {{The Constraint Satisfaction Problem: Complexity and Approximability (Dagstuhl Seminar 15301)}}, pages = {22--41}, journal = {Dagstuhl Reports}, ISSN = {2192-5283}, year = {2016}, volume = {5}, number = {7}, editor = {Bulatov, Andrei A. and Guruswami, Venkatesan and Krokhin, Andrei and Marx, D\'{a}niel}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/DagRep.5.7.22}, URN = {urn:nbn:de:0030-drops-56714}, doi = {10.4230/DagRep.5.7.22}, annote = {Keywords: Constraint satisfaction problem (CSP), Computational complexity, CSP dichotomy conjecture, Hardness of approximation, Unique games conjecture, Fixed-parameter tractability, Descriptive complexity, Universal algebra, Logic, Decomposition methods} }

Document

**Published in:** LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

A hypergraph is said to be X-colorable if its vertices can be colored with X colors so that no hyperedge is monochromatic. 2-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a 2-colorable k-uniform hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a fraction 2^(-k+1) of hyperedges (which is trivially achieved by a random 2-coloring), and the best algorithms to color the hypergraph properly require about n^(1-1/k) colors, approaching the trivial bound of n as k increases.
In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 2-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest colors) versions, when the input hypergraph is promised to have the following properties stronger than 2-colorability:
(A) Low-discrepancy: If the hypergraph has a 2-coloring of discrepancy l << sqrt(k), we give an algorithm to color the hypergraph with about n^(O(l^2/k)) colors. For the maximization version, however, we prove NP-hardness of finding a 2-coloring that miscolors a fraction smaller than 2^(-O(k)) (resp. k^(-O(k))) of the hyperedges when l = O(log k) (resp. l = 2). Assuming the Unique Games conjecture, we improve the latter hardness factor to 2^(-O(k)) for almost discrepancy-1 hypergraphs. (A small sketch of the discrepancy measure follows this list.)
(B) Rainbow colorability: If the hypergraph has a (k-l)-coloring such that each hyperedge is polychromatic with all these colors (this is stronger than an (l+1)-discrepancy 2-coloring), we give a 2-coloring algorithm that miscolors at most a k^(-Omega(k)) fraction of the hyperedges when l << sqrt(k), and complement this with a matching Unique Games hardness result showing that when l = sqrt(k), it is hard to even beat the 2^(-k+1) bound achieved by a random coloring.
(C) Strong Colorability: We obtain similar (stronger) Min- and Max-2-Coloring algorithmic results in the case of (k+l)-strong colorability.
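
As promised in item (A), here is the discrepancy measure in code form: the discrepancy of a +1/-1 vertex 2-coloring is the maximum imbalance over hyperedges; the example hypergraph is an arbitrary toy choice.

```python
def discrepancy(edges, coloring):
    """Discrepancy of a 2-coloring (colors +1/-1): the maximum over
    hyperedges of |sum of colors in the edge|. A proper 2-coloring of a
    k-uniform hypergraph has discrepancy at most k-2; a perfectly balanced
    one has discrepancy 0."""
    return max(abs(sum(coloring[v] for v in e)) for e in edges)

edges = [(0, 1, 2, 3), (0, 1, 4, 5), (2, 3, 4, 5)]     # 4-uniform, 6 vertices
coloring = {0: 1, 1: -1, 2: 1, 3: -1, 4: 1, 5: -1}
print(discrepancy(edges, coloring))                     # 0: a discrepancy-0 2-coloring
```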

Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee. Approximate Hypergraph Coloring under Low-discrepancy and Related Promises. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 152-174, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

@InProceedings{bhattiprolu_et_al:LIPIcs.APPROX-RANDOM.2015.152, author = {Bhattiprolu, Vijay V. S. P. and Guruswami, Venkatesan and Lee, Euiwoong}, title = {{Approximate Hypergraph Coloring under Low-discrepancy and Related Promises}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)}, pages = {152--174}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-89-7}, ISSN = {1868-8969}, year = {2015}, volume = {40}, editor = {Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.152}, URN = {urn:nbn:de:0030-drops-53011}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.152}, annote = {Keywords: Hypergraph Coloring, Discrepancy, Rainbow Coloring, Strong Coloring, Algorithms, Semidefinite Programming, Hardness of Approximation} }

Document

**Published in:** LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

Given an undirected graph G=(V,E) and a fixed pattern graph H with k vertices, we consider the H-Transversal and H-Packing problems. The former asks to find the smallest subset S of vertices such that the subgraph induced by V - S does not have H as a subgraph, and the latter asks to find the maximum number of pairwise disjoint k-subsets S1, ..., Sm such that the subgraph induced by each Si has H as a subgraph.
We prove that if H is 2-connected, H-Transversal and H-Packing are almost as hard to approximate as general k-Hypergraph Vertex Cover and k-Set Packing, so it is NP-hard to approximate them within factors of Omega(k) and Omega(k / polylog(k)) respectively. We also show that there is a 1-connected H for which H-Transversal admits an O(log k)-approximation algorithm, so the 2-connectivity requirement in our hardness result cannot be relaxed to 1-connectivity. For the special case of H-Transversal where H is a (family of) cycles, we discuss the implication of our result for the related Feedback Vertex Set problem, and give a different hardness proof for directed graphs.
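
For intuition on what the Omega(k) hardness essentially matches, here is a sketch of the trivial factor-k greedy for H-Transversal: repeatedly find a copy of H and delete all of its k vertices. The found copies are pairwise vertex-disjoint, and any transversal must contain at least one vertex of each, so the greedy removes at most k times the optimum. The brute-force pattern search below is exponential in k and meant only as an illustration (the instance and pattern are made up).

```python
from itertools import combinations, permutations

def find_copy(adj, pattern_edges, k, vertices):
    """Brute force: return the vertex set of some copy of H (k vertices,
    edges 'pattern_edges' on {0..k-1}) appearing as a subgraph, or None."""
    for subset in combinations(vertices, k):
        for perm in permutations(subset):
            if all(perm[u] in adj[perm[v]] for (u, v) in pattern_edges):
                return set(subset)
    return None

def greedy_h_transversal(adj, pattern_edges, k):
    """Factor-k approximation: delete all k vertices of each found copy."""
    alive = set(adj)
    removed = set()
    while True:
        copy = find_copy(adj, pattern_edges, k, sorted(alive))
        if copy is None:
            return removed
        removed |= copy
        alive -= copy

# Example: H = triangle (K_3) on a small graph given as an adjacency dict.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
triangle = [(0, 1), (1, 2), (0, 2)]
print(greedy_h_transversal(graph, triangle, 3))  # e.g. {0, 1, 2}
```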

Venkatesan Guruswami and Euiwoong Lee. Inapproximability of H-Transversal/Packing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 284-304, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2015.284, author = {Guruswami, Venkatesan and Lee, Euiwoong}, title = {{Inapproximability of H-Transversal/Packing}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)}, pages = {284--304}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-89-7}, ISSN = {1868-8969}, year = {2015}, volume = {40}, editor = {Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.284}, URN = {urn:nbn:de:0030-drops-53085}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.284}, annote = {Keywords: Constraint Satisfaction Problems, Approximation resistance} }

Document

**Published in:** LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

A Boolean constraint satisfaction problem (CSP) is called approximation resistant if independently setting variables to 1 with some probability achieves the best possible approximation ratio for the fraction of constraints satisfied. We study approximation resistance of a natural subclass of CSPs that we call Symmetric Constraint Satisfaction Problems (SCSPs), where satisfaction of each constraint depends only on the number of true literals in its scope. Thus an SCSP of arity k can be described by a subset S of {0, 1, ..., k} of allowed numbers of true literals.
For SCSPs without negation, we conjecture that a simple sufficient condition for approximation resistance due to Austrin and Håstad is indeed necessary. We show that this condition has a compact analytic representation in the case of symmetric CSPs (depending only on the gap between the largest and smallest numbers in S), and provide the rationale behind our conjecture. We prove two interesting special cases of the conjecture: (i) when S is an interval and (ii) when S is even. For SCSPs with negation, we prove that the analogous sufficient condition due to Austrin and Mossel is necessary for the same two cases, though we do not pose an analogous conjecture in general.
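
As a concrete reading of the definitions, the sketch below (with a hypothetical allowed set S and arity k) computes the probability that a p-biased random assignment satisfies a single symmetric constraint, and grid-searches for the best bias p. For an SCSP without negation, the number of true literals in a constraint's scope is Binomial(k, p) under such an assignment, and approximation resistance says no efficient algorithm can beat the resulting ratio in the worst case.

```python
from math import comb

def sat_prob(S, k, p):
    """Probability that Binomial(k, p) lands in the allowed set S, i.e. that
    a p-biased random assignment satisfies one symmetric constraint."""
    return sum(comb(k, s) * p**s * (1 - p)**(k - s) for s in S)

def best_bias(S, k, grid=1000):
    """Grid search for the best bias p of a random assignment."""
    return max(range(grid + 1), key=lambda i: sat_prob(S, k, i / grid)) / grid

S, k = {1, 3}, 4
p = best_bias(S, k)
print(p, sat_prob(S, k, p))   # here p = 0.5 is optimal, giving ratio 0.5
```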

Venkatesan Guruswami and Euiwoong Lee. Towards a Characterization of Approximation Resistance for Symmetric CSPs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 305-322, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2015.305, author = {Guruswami, Venkatesan and Lee, Euiwoong}, title = {{Towards a Characterization of Approximation Resistance for Symmetric CSPs}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)}, pages = {305--322}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-89-7}, ISSN = {1868-8969}, year = {2015}, volume = {40}, editor = {Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.305}, URN = {urn:nbn:de:0030-drops-53095}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.305}, annote = {Keywords: Constraint Satisfaction Problems, Approximation resistance} }

Document

**Published in:** LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

An emerging theory of "linear algebraic pseudorandomness" aims to understand the linear algebraic analogs of fundamental Boolean pseudorandom objects, where the rank of subspaces plays the role of the size of subsets. In this work, we study and highlight the interrelationships between several such algebraic objects: subspace designs, dimension expanders, seeded rank condensers, two-source rank condensers, and rank-metric codes. In particular, with the recent construction of near-optimal subspace designs by Guruswami and Kopparty as a starting point, we construct good (seeded) rank condensers (both lossless and lossy versions), which are a small collection of linear maps from F^n to F^t for t << n such that for every subset of F^n of small rank, its rank is preserved (up to a constant factor in the lossy case) by at least one of the maps.
We then compose a tensoring operation with our lossy rank condenser to construct constant-degree dimension expanders over polynomially large fields. That is, we give a constant number of explicit linear maps A_i from F^n to F^n such that for any subspace V of F^n of dimension at most n/2, the dimension of the span of the A_i(V) is at least (1+Omega(1)) times the dimension of V. Previous constructions of such constant-degree dimension expanders were based on Kazhdan's property T (for the case when F has characteristic zero) or monotone expanders (for every field F); in either case the construction was harder than that of usual vertex expanders. Our construction, on the other hand, is simpler.
For two-source rank condensers, we observe that the lossless variant (where the output rank is the product of the ranks of the two sources) is equivalent to the notion of a linear rank-metric code. For the lossy case, using our seeded rank condensers, we give a reduction of the general problem to the case when the sources have high (n^Omega(1)) rank. When the sources have constant rank, combining this with an "inner condenser" found by brute-force leads to a two-source rank condenser with output length nearly matching the probabilistic constructions.
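
The expansion property itself is straightforward to test numerically. The following sketch works over the reals with random maps as a stand-in (the paper's point is precisely that explicit maps over polynomially large finite fields are hard to obtain, which this floating-point toy does not capture): it checks that the span of the images A_i(V) has dimension at least (1+Omega(1)) times dim(V).

```python
import numpy as np

def expansion(maps, V_basis):
    """dim(span of the images A_i(V)) / dim(V), for n x n linear maps A_i
    and a subspace V given by its basis as columns."""
    images = np.hstack([A @ V_basis for A in maps])
    return np.linalg.matrix_rank(images) / np.linalg.matrix_rank(V_basis)

rng = np.random.default_rng(0)
n, d = 20, 3                            # ambient dimension, degree (number of maps)
maps = [rng.standard_normal((n, n)) for _ in range(d)]
V = rng.standard_normal((n, n // 2))    # a random n/2-dimensional subspace
print(expansion(maps, V))               # random maps expand maximally: 2.0
```

Random maps expand essentially optimally; the difficulty the paper addresses is achieving constant degree explicitly.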

Michael A. Forbes and Venkatesan Guruswami. Dimension Expanders via Rank Condensers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 800-814, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{forbes_et_al:LIPIcs.APPROX-RANDOM.2015.800, author = {Forbes, Michael A. and Guruswami, Venkatesan}, title = {{Dimension Expanders via Rank Condensers}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)}, pages = {800--814}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-89-7}, ISSN = {1868-8969}, year = {2015}, volume = {40}, editor = {Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.800}, URN = {urn:nbn:de:0030-drops-53379}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.800}, annote = {Keywords: dimension expanders, rank condensers, rank-metric codes, subspace designs, Wronskians} }

Document

**Published in:** LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

The noise model of deletions poses significant challenges in coding theory, with basic questions like the capacity of the binary deletion channel still being open. In this paper, we study the harder model of worst-case deletions, with a focus on constructing efficiently encodable and decodable codes for the two extreme regimes of high-noise and high-rate. Specifically, we construct polynomial-time decodable codes with the following trade-offs (for any epsilon > 0):
(1) Codes that can correct a fraction 1-epsilon of deletions with rate poly(epsilon) over an alphabet of size poly(1/epsilon); (2) binary codes of rate 1-O~(sqrt(epsilon)) that can correct a fraction epsilon of deletions; and (3) binary codes that can be list decoded from a fraction (1/2-epsilon) of deletions with rate poly(epsilon).
Our work is the first to achieve the qualitative goals of correcting a deletion fraction approaching 1 over bounded alphabets, and correcting a constant fraction of bit deletions with rate approaching 1. The above results bring our understanding of deletion code constructions in these regimes to a level similar to that of worst-case errors.
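
For worst-case deletions there is a clean combinatorial criterion: a length-n code corrects t deletions exactly when no two codewords share a common subsequence of length n - t (otherwise their deletion balls intersect). A small Python sketch of this check (toy codes only, since the test is quadratic in the code size and the LCS is computed by textbook dynamic programming):

```python
from itertools import combinations

def lcs(a, b):
    """Length of the longest common subsequence, via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def corrects_t_deletions(code, t):
    """A length-n block code corrects t worst-case deletions iff every pair
    of codewords has LCS shorter than n - t."""
    n = len(code[0])
    return all(lcs(u, v) < n - t for u, v in combinations(code, 2))

print(corrects_t_deletions(["000000", "111111"], 5))  # True
print(corrects_t_deletions(["0101", "0110"], 1))      # False: LCS "010" too long
```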

Venkatesan Guruswami and Carol Wang. Deletion Codes in the High-noise and High-rate Regimes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 867-880, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2015.867, author = {Guruswami, Venkatesan and Wang, Carol}, title = {{Deletion Codes in the High-noise and High-rate Regimes}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)}, pages = {867--880}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-89-7}, ISSN = {1868-8969}, year = {2015}, volume = {40}, editor = {Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.867}, URN = {urn:nbn:de:0030-drops-53417}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.867}, annote = {Keywords: algorithmic coding theory, deletion codes, list decoding, probabilistic method, explicit constructions} }

Document

**Published in:** LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)

We prove a lower estimate on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q={0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show
H(X_1 + X_2 | Y_1, Y_2) - H(X | Y) >= alpha(q) * H(X | Y) * (1 - H(X | Y))
for some alpha(q) > 0, where H(.) is the entropy normalized by the factor log_2(q). In particular, if X | Y is not close to being fully random or fully deterministic, i.e., H(X | Y) in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption that q is prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X) = H(X). For X supported on torsion-free groups, and with no conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao [Tao, CP&C 2010].
We use our sumset inequality to analyze Arikan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We obtain capacity-achieving source codes with similar guarantees for composite alphabets by factoring q into primes and combining polar codes for each prime in the factorization.
A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity for a block length and decoding complexity bounded by a polynomial in 1/epsilon. The result was previously shown for the special case of binary-input channels [Guruswami/Xia, FOCS'13; Hassani/Alishahi/Urbanke, CoRR 2013], and this work extends the result to channels over any alphabet.
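
The unconditional (Y absent) form of the inequality is easy to check numerically: for prime q, summing two i.i.d. copies modulo q strictly increases entropy unless X is uniform or deterministic. A small sketch with a made-up distribution:

```python
import numpy as np

def H(p, q):
    """Entropy of a distribution on Z_q, normalized by log2(q)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum()) / np.log2(q)

def sum_dist(p, q):
    """Law of X1 + X2 (mod q) for i.i.d. X1, X2 ~ p: cyclic self-convolution."""
    return np.array([sum(p[i] * p[(s - i) % q] for i in range(q))
                     for s in range(q)])

q = 5                                   # primality of q is essential
p = np.array([0.6, 0.3, 0.1, 0.0, 0.0])
print(H(p, q), H(sum_dist(p, q), q))    # the entropy strictly increases
```

Replacing q = 5 with q = 4 and p uniform on {0, 2} reproduces the subgroup counterexample, where H(X+X) = H(X).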

Venkatesan Guruswami and Ameya Velingker. An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 42-57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)

Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.CCC.2015.42, author = {Guruswami, Venkatesan and Velingker, Ameya}, title = {{An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets}}, booktitle = {30th Conference on Computational Complexity (CCC 2015)}, pages = {42--57}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-81-1}, ISSN = {1868-8969}, year = {2015}, volume = {33}, editor = {Zuckerman, David}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.42}, URN = {urn:nbn:de:0030-drops-50755}, doi = {10.4230/LIPIcs.CCC.2015.42}, annote = {Keywords: Polar codes, polynomial gap to capacity, entropy sumset inequality, arbitrary alphabets} }

Document

**Published in:** LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)

We construct an explicit family of linear rank-metric codes over any field F that enables efficient list decoding up to a fraction rho of errors in the rank metric with a rate of 1-rho-epsilon, for any desired rho in (0,1) and epsilon > 0. Previously, a Monte Carlo construction of such codes was known; this is in fact the first explicit construction of positive-rate rank-metric codes list-decodable beyond the unique decoding radius.
Our codes are explicit subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an F-subspace that evades certain structured subspaces over an extension field of F. These structured spaces arise from the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC'13). Our construction is obtained by combining subspace designs constructed by Guruswami and Kopparty (FOCS'13) with subspace-evasive varieties due to Dvir and Lovett (STOC'12).
We establish a similar result for subspace codes, which are collections of subspaces, every pair of which has a low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.
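
For reference, the rank metric on matrix codewords is d(A, B) = rank(A - B) over the base field; over GF(2) this is easy to compute exactly. A minimal sketch (rows encoded as integer bitmasks; the matrices are made-up toys, not Gabidulin codewords):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as int bitmasks:
    Gaussian elimination keyed by each row's highest set bit."""
    basis = {}                       # highest set bit -> stored row
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb not in basis:
                basis[hb] = r
                break
            r ^= basis[hb]           # eliminate the leading bit
    return len(basis)

def rank_distance(A, B):
    """Rank-metric distance rank(A - B); subtraction is XOR over GF(2)."""
    return gf2_rank([a ^ b for a, b in zip(A, B)])

A = [0b101, 0b011, 0b110]            # two 3x3 matrices over GF(2),
B = [0b100, 0b011, 0b110]            # rows as 3-bit integers
print(rank_distance(A, B))           # A - B has one nonzero row: rank 1
```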

Venkatesan Guruswami and Carol Wang. Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 748-761, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)

Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2014.748, author = {Guruswami, Venkatesan and Wang, Carol}, title = {{Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)}, pages = {748--761}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-74-3}, ISSN = {1868-8969}, year = {2014}, volume = {28}, editor = {Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.748}, URN = {urn:nbn:de:0030-drops-47361}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2014.748}, annote = {Keywords: list-decoding, pseudorandomness, algebraic coding, explicit constructions} }

Document

Invited Talk

**Published in:** LIPIcs, Volume 24, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)

Shannon's monumental 1948 work laid the foundations for the rich fields of information and coding theory. The quest for efficient coding schemes to approach Shannon capacity has occupied researchers ever since, with spectacular progress enabling the widespread use of error-correcting codes in practice. Yet the theoretical problem of approaching capacity arbitrarily closely with polynomial complexity remained open except in the special case of erasure channels.
In 2008, Arikan proposed an insightful new method for constructing capacity-achieving codes based on channel polarization. In this talk, I will begin with a self-contained survey of Arikan's celebrated construction of polar codes, and then discuss our recent proof (with Patrick Xia) that, for all binary-input symmetric memoryless channels, polar codes enable reliable communication at rates within epsilon > 0 of the Shannon capacity with block length (delay), construction complexity, and decoding complexity all bounded by a polynomial in the inverse of the gap to capacity, i.e., by poly(1/epsilon). Polar coding gives the first explicit construction with rigorous proofs of all these properties; previous constructions were not known to achieve capacity with less than exp(1/epsilon) decoding complexity.
We establish the capacity-achieving property of polar codes via a direct analysis of the underlying martingale of conditional entropies, without relying on the martingale convergence theorem. This step gives rough polarization (noise levels roughly epsilon for the good channels), which can then be adequately amplified by tracking the decay of the channel Bhattacharyya parameters. Our effective bounds imply that polar codes can have block length bounded by poly(1/epsilon). We also show that the generator matrix of such polar codes can be constructed in polynomial time by algorithmically computing an adequate approximation of the polarization process.
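
For the special case of the binary erasure channel, the polarization process has a closed form, which makes for a compact sanity check (a standard textbook simulation, not the paper's general-channel analysis): one polarization step splits a BEC with erasure probability z into two synthetic channels with erasure probabilities 2z - z^2 and z^2, and the fraction of near-noiseless synthetic channels approaches the capacity 1 - z as the recursion deepens.

```python
def polarize_bec(z0, levels):
    """Erasure probabilities of the 2**levels synthetic channels obtained
    from a BEC(z0): each step maps z to the pair (2z - z*z, z*z)."""
    zs = [z0]
    for _ in range(levels):
        zs = [w for z in zs for w in (2*z - z*z, z*z)]
    return zs

z0, levels = 0.5, 12                    # 2**12 = 4096 synthetic channels
zs = polarize_bec(z0, levels)
good = sum(z < 1e-6 for z in zs) / len(zs)
print(good, 1 - z0)   # fraction of near-noiseless channels -> capacity 1 - z0
```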

Venkatesan Guruswami. Polar Codes: Reliable Communication with Complexity Polynomial in the Gap to Shannon Capacity (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 24, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)

Copy BibTex To Clipboard

@InProceedings{guruswami:LIPIcs.FSTTCS.2013.1, author = {Guruswami, Venkatesan}, title = {{Polar Codes: Reliable Communication with Complexity Polynomial in the Gap to Shannon Capacity}}, booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)}, pages = {1--1}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-939897-64-4}, ISSN = {1868-8969}, year = {2013}, volume = {24}, editor = {Seth, Anil and Vishnoi, Nisheeth K.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2013.1}, URN = {urn:nbn:de:0030-drops-43992}, doi = {10.4230/LIPIcs.FSTTCS.2013.1}, annote = {Keywords: Error-correction algorithms, Linear Codes, Shannon capacity, Martingale convergence, Computational complexity} }
