Search Results

Documents authored by Sudan, Madhu


Document
Track A: Algorithms, Complexity and Games
Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs

Authors: Sanjeev Khanna, Aaron (Louie) Putterman, and Madhu Sudan

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Recently, a number of variants of the notion of cut-preserving hypergraph sparsification have been studied in the literature. These variants include directed hypergraph sparsification, submodular hypergraph sparsification, general notions of approximation including spectral approximations, and more general notions like sketching that can answer cut queries using more general data structures than just sparsifiers. In this work, we provide reductions between these different variants of hypergraph sparsification and establish new upper and lower bounds on the space complexity of preserving their cuts. Specifically, we show that:
1) (1 ± ε) directed hypergraph spectral (respectively cut) sparsification on n vertices efficiently reduces to (1 ± ε) undirected hypergraph spectral (respectively cut) sparsification on n² + 1 vertices. Using the work of Lee and of Jambulapati, Liu, and Sidford (STOC 2023), this gives directed hypergraph spectral sparsifiers with O(n² log²(n) / ε²) hyperedges, and, using the work of Chen, Khanna, and Nagda (FOCS 2020), directed hypergraph cut sparsifiers with O(n² log(n) / ε²) hyperedges; both bounds improve upon the work of Oko, Sakaue, and Tanigawa (ICALP 2023).
2) Any cut sketching scheme which preserves all cuts in any directed hypergraph on n vertices to a (1 ± ε) factor (for ε = 1/2^{O(√(log n))}) must have worst-case bit complexity n^{3 - o(1)}. Because directed hypergraphs are a subclass of submodular hypergraphs, this also shows a worst-case sketching lower bound of n^{3 - o(1)} bits for sketching cuts in general submodular hypergraphs.
3) (1 ± ε) monotone submodular hypergraph cut sparsification on n vertices efficiently reduces to (1 ± ε) symmetric submodular hypergraph sparsification on n + 1 vertices. Using the work of Jambulapati et al. (FOCS 2023), this gives monotone submodular hypergraph sparsifiers with Õ(n / ε²) hyperedges, improving on the O(n³ / ε²) hyperedge bound of Kenneth and Krauthgamer (arXiv 2023).
At a high level, our results use the same general principle: by showing that cuts in one class of hypergraphs can be simulated by cuts in a simpler class of hypergraphs, we can leverage sparsification results for the simpler class of hypergraphs.
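
For reference, the standard (1 ± ε) cut-sparsification guarantee for an undirected hypergraph H = (V, E, w) can be written as follows; the directed and submodular variants studied above generalize this cut function (this formulation is background, not text from the paper):

\[
\mathrm{cut}_H(S) \;=\; \sum_{e \in E \,:\, e \cap S \neq \emptyset,\ e \not\subseteq S} w(e),
\qquad
(1-\varepsilon)\,\mathrm{cut}_H(S) \;\le\; \mathrm{cut}_{H'}(S) \;\le\; (1+\varepsilon)\,\mathrm{cut}_H(S)
\quad \text{for all } S \subseteq V,
\]

where H' is a reweighted sub-hypergraph of H with as few hyperedges as possible.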

Cite as

Sanjeev Khanna, Aaron (Louie) Putterman, and Madhu Sudan. Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 98:1-98:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{khanna_et_al:LIPIcs.ICALP.2024.98,
  author =	{Khanna, Sanjeev and Putterman, Aaron (Louie) and Sudan, Madhu},
  title =	{{Almost-Tight Bounds on Preserving Cuts in Classes of Submodular Hypergraphs}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{98:1--98:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.98},
  URN =		{urn:nbn:de:0030-drops-202410},
  doi =		{10.4230/LIPIcs.ICALP.2024.98},
  annote =	{Keywords: Sparsification, sketching, hypergraphs}
}
Document
RANDOM
Low-Degree Testing over Grids

Authors: Prashanth Amireddy, Srikanth Srinivasan, and Madhu Sudan

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We study the question of local testability of low (constant) degree functions from a product domain 𝒮_1 × … × 𝒮_n to a field 𝔽, where 𝒮_i ⊆ 𝔽 can be arbitrary constant-sized sets. We show that this family is locally testable when the grid is "symmetric". That is, if 𝒮_i = 𝒮 for all i, there is a probabilistic algorithm using constantly many queries that distinguishes whether f has a polynomial representation of degree at most d or is Ω(1)-far from having this property. In contrast, we show that there exist asymmetric grids with |𝒮_1| = ⋯ = |𝒮_n| = 3 for which testing requires ω_n(1) queries, thereby establishing that even in the context of polynomials, local testing depends on the structure of the domain and not just the distance of the underlying code. The low-degree testing problem has been studied extensively over the years and a wide variety of tools have been applied to propose and analyze tests. Our work introduces yet another new connection in this rich field, by building low-degree tests out of tests for "junta-degrees". A function f:𝒮_1 × ⋯ × 𝒮_n → 𝒢, for an abelian group 𝒢, is said to be a junta-degree-d function if it is a sum of d-juntas. We derive our low-degree test by giving a new local test for junta-degree-d functions. For the analysis of our tests, we deduce a small-set expansion theorem for spherical/Hamming noise over large grids, which may be of independent interest.
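
In symbols, the junta-degree notion used above reads as follows (a restatement of the abstract's definition; m and the g_i are notation introduced here):

\[
f \;=\; \sum_{i=1}^{m} g_i,
\qquad
\text{each } g_i : \mathcal{S}_1 \times \cdots \times \mathcal{S}_n \to \mathcal{G} \text{ depending on at most } d \text{ coordinates (a } d\text{-junta).}
\]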

Cite as

Prashanth Amireddy, Srikanth Srinivasan, and Madhu Sudan. Low-Degree Testing over Grids. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 41:1-41:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{amireddy_et_al:LIPIcs.APPROX/RANDOM.2023.41,
  author =	{Amireddy, Prashanth and Srinivasan, Srikanth and Sudan, Madhu},
  title =	{{Low-Degree Testing over Grids}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{41:1--41:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.41},
  URN =		{urn:nbn:de:0030-drops-188665},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.41},
  annote =	{Keywords: Property testing, Low-degree testing, Small-set expansion, Local testing}
}
Document
Is This Correct? Let’s Check!

Authors: Omri Ben-Eliezer, Dan Mikulincer, Elchanan Mossel, and Madhu Sudan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Societal accumulation of knowledge is a complex process. The correctness of new units of knowledge depends not only on the correctness of new reasoning, but also on the correctness of old units that the new one builds on. The errors in such accumulation processes are often remedied by error correction and detection heuristics. Motivating examples include the scientific process based on scientific publications, and software development based on libraries of code. Natural processes that aim to keep errors under control, such as peer review in scientific publications, and testing and debugging in software development, would typically check existing pieces of knowledge - both for the reasoning that generated them and the previous facts they rely on. In this work, we present a simple process that models such accumulation of knowledge and study the persistence (or lack thereof) of errors. We consider a simple probabilistic model for the generation of new units of knowledge based on the preferential attachment growth model, which additionally allows for errors. Furthermore, the process includes checks aimed at catching these errors. We investigate when effects of errors persist forever in the system (with positive probability) and when they get rooted out completely by the checking process. The two basic parameters associated with the checking process are the probability of conducting a check and the depth of the check. We show that errors are rooted out if checks are sufficiently frequent and sufficiently deep. In contrast, shallow or infrequent checks are insufficient to root out errors.
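
The abstract describes the accumulation process only qualitatively; the sketch below is a toy Python simulation of one plausible instantiation. The attachment rule, how errors are inherited, and what happens when a check fails are all illustrative assumptions made here, not the paper's definitions.

import random

def simulate(n_units=5_000, p_err=0.01, p_check=0.3, depth=3, seed=0):
    # Toy simulation loosely following the accumulation model sketched above.
    # All specifics (preferential attachment by weight, errors inherited from a
    # single parent, a check that inspects the new reasoning plus up to `depth`
    # ancestors and rejects the unit on finding any error) are illustrative
    # assumptions, not the paper's exact process.
    rng = random.Random(seed)
    parent, erroneous, weight = [-1], [False], [1]  # start from one correct root unit

    for _ in range(n_units):
        u = rng.choices(range(len(parent)), weights=weight)[0]  # preferential attachment
        fresh_error = rng.random() < p_err
        bad = erroneous[u] or fresh_error           # inherited or newly introduced error

        if rng.random() < p_check:                  # a check is conducted
            caught, v, steps = fresh_error, u, 0
            while v != -1 and steps < depth:        # inspect up to `depth` ancestors
                caught = caught or erroneous[v]
                v, steps = parent[v], steps + 1
            if caught:
                continue                            # error found: the new unit is rejected

        parent.append(u)
        erroneous.append(bad)
        weight.append(1)
        weight[u] += 1                              # the parent gains attachment weight

    return sum(erroneous) / len(erroneous)          # fraction of erroneous units

if __name__ == "__main__":
    for d in (1, 3, 6):
        print("check depth", d, "-> error fraction", simulate(depth=d))

Varying depth and p_check in such a toy model gives a feel for the contrast drawn above between frequent, deep checks and shallow or infrequent ones.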

Cite as

Omri Ben-Eliezer, Dan Mikulincer, Elchanan Mossel, and Madhu Sudan. Is This Correct? Let’s Check!. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 15:1-15:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2023.15,
  author =	{Ben-Eliezer, Omri and Mikulincer, Dan and Mossel, Elchanan and Sudan, Madhu},
  title =	{{Is This Correct? Let’s Check!}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{15:1--15:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.15},
  URN =		{urn:nbn:de:0030-drops-175180},
  doi =		{10.4230/LIPIcs.ITCS.2023.15},
  annote =	{Keywords: Error Propagation, Preferential Attachment}
}
Document
APPROX
Sketching Approximability of (Weak) Monarchy Predicates

Authors: Chi-Ning Chou, Alexander Golovnev, Amirbehshad Shahrasbi, Madhu Sudan, and Santhoshini Velusamy

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
We analyze the sketching approximability of constraint satisfaction problems on Boolean domains, where the constraints are balanced linear threshold functions applied to literals. In particular, we explore the approximability of monarchy-like functions where the value of the function is determined by a weighted combination of the vote of the first variable (the president) and the sum of the votes of all remaining variables. The pure version of this function is when the president can be overruled only when all remaining variables agree. For every k ≥ 5, we show that CSPs where the underlying predicate is a pure monarchy function on k variables have no non-trivial sketching approximation algorithm in o(√n) space. We also show infinitely many weaker monarchy functions for which CSPs using such constraints are non-trivially approximable by O(log(n)) space sketching algorithms. Moreover, we give the first example of sketching approximable asymmetric Boolean CSPs. Our results work within the framework of Chou, Golovnev, Sudan, and Velusamy (FOCS 2021) that characterizes the sketching approximability of all CSPs. Their framework can be applied naturally to get a computer-aided analysis of the approximability of any specific constraint satisfaction problem. The novelty of our work is in using their work to get an analysis that applies to infinitely many problems simultaneously.
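
As a concrete illustration, one standard way to realize the pure monarchy function on ±1-valued votes is to give the president weight k - 2, as in the Python sketch below; the paper's exact normalization and 0/1-vs-±1 conventions may differ.

def pure_monarchy(x):
    # Pure monarchy predicate on k = len(x) votes in {-1, +1}: the president
    # x[0] carries weight k - 2, so it is overruled only when all remaining
    # k - 1 voters agree against it.  (This weighting is one standard way to
    # realize the function described above, not necessarily the paper's form.)
    k = len(x)
    total = (k - 2) * x[0] + sum(x[1:])
    return 1 if total > 0 else -1

# k = 5: the president votes +1 and all four others vote -1, so it is overruled ...
assert pure_monarchy([+1, -1, -1, -1, -1]) == -1
# ... but a single supporter is enough for the president to prevail.
assert pure_monarchy([+1, +1, -1, -1, -1]) == +1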

Cite as

Chi-Ning Chou, Alexander Golovnev, Amirbehshad Shahrasbi, Madhu Sudan, and Santhoshini Velusamy. Sketching Approximability of (Weak) Monarchy Predicates. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chou_et_al:LIPIcs.APPROX/RANDOM.2022.35,
  author =	{Chou, Chi-Ning and Golovnev, Alexander and Shahrasbi, Amirbehshad and Sudan, Madhu and Velusamy, Santhoshini},
  title =	{{Sketching Approximability of (Weak) Monarchy Predicates}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.35},
  URN =		{urn:nbn:de:0030-drops-171573},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.35},
  annote =	{Keywords: sketching algorithms, approximability, linear threshold functions}
}
Document
Invited Talk
Streaming and Sketching Complexity of CSPs: A Survey (Invited Talk)

Authors: Madhu Sudan

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
In this survey we describe progress over the last decade or so in understanding the complexity of solving constraint satisfaction problems (CSPs) approximately in the streaming and sketching models of computation. After surveying some of the results, we give some sketches of the proofs and, in particular, try to explain why there is a tight dichotomy result for sketching algorithms working in the subpolynomial space regime.

Cite as

Madhu Sudan. Streaming and Sketching Complexity of CSPs: A Survey (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{sudan:LIPIcs.ICALP.2022.5,
  author =	{Sudan, Madhu},
  title =	{{Streaming and Sketching Complexity of CSPs: A Survey}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.5},
  URN =		{urn:nbn:de:0030-drops-163460},
  doi =		{10.4230/LIPIcs.ICALP.2022.5},
  annote =	{Keywords: Streaming algorithms, Sketching algorithms, Dichotomy, Communication Complexity}
}
Document
APPROX
Streaming Approximation Resistance of Every Ordering CSP

Authors: Noah Singer, Madhu Sudan, and Santhoshini Velusamy

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
An ordering constraint satisfaction problem (OCSP) is given by a positive integer k and a constraint predicate Π mapping permutations on {1,…,k} to {0,1}. Given an instance of OCSP(Π) on n variables and m constraints, the goal is to find an ordering of the n variables that maximizes the number of constraints that are satisfied, where a constraint specifies a sequence of k distinct variables and the constraint is satisfied by an ordering on the n variables if the ordering induced on the k variables in the constraint satisfies Π. Ordering constraint satisfaction problems capture natural problems including "Maximum acyclic subgraph (MAS)" and "Betweenness". In this work we consider the task of approximating the maximum number of satisfiable constraints in the (single-pass) streaming setting, where an instance is presented as a stream of constraints. We show that for every Π, OCSP(Π) is approximation-resistant to o(n)-space streaming algorithms, i.e., algorithms using o(n) space cannot distinguish streams where almost every constraint is satisfiable from streams where no ordering beats the random ordering by a noticeable amount. This space bound is tight up to polylogarithmic factors. In the case of MAS our result shows that for every ε > 0, MAS is not 1/2+ε-approximable in o(n) space. The previous best inapproximability result only ruled out a 3/4-approximation in o(√ n) space. Our results build on recent works of Chou, Golovnev, Sudan, Velingker, and Velusamy who show tight, linear-space inapproximability results for a broad class of (non-ordering) constraint satisfaction problems (CSPs) over arbitrary (finite) alphabets. Our results are obtained by building a family of appropriate CSPs (one for every q) from any given OCSP, and applying their work to this family of CSPs. To convert the resulting hardness results for CSPs back to our OCSP, we show that the hard instances from this earlier work have the following "small-set expansion" property: If the CSP instance is viewed as a hypergraph in the natural way, then for every partition of the hypergraph into small blocks most of the hyperedges are incident on vertices from distinct blocks. By exploiting this combinatorial property, in combination with the hardness results of the resulting families of CSPs, we give optimal inapproximability results for all OCSPs.
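
To make the OCSP objective concrete, here is a small Python evaluator for the MAS instance mentioned above (the brute-force search is purely illustrative and has nothing to do with the streaming setting studied in the paper).

from itertools import permutations

def mas_value(constraints, ordering):
    # OCSP value of an ordering for "Maximum acyclic subgraph (MAS)": each
    # constraint (u, v) asks that u precede v in the ordering.  (MAS-as-OCSP is
    # named above; this evaluator just makes the objective concrete and is not
    # code from the paper.)
    pos = {v: i for i, v in enumerate(ordering)}
    return sum(pos[u] < pos[v] for u, v in constraints)

def best_ordering(n, constraints):
    # Brute force over all n! orderings, purely to illustrate the objective.
    return max(permutations(range(n)), key=lambda o: mas_value(constraints, o))

# A directed 3-cycle: no ordering satisfies all three constraints; the best
# orderings satisfy 2 of 3, while a uniformly random ordering satisfies 1.5
# in expectation.
cons = [(0, 1), (1, 2), (2, 0)]
best = best_ordering(3, cons)
print(best, mas_value(cons, best))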

Cite as

Noah Singer, Madhu Sudan, and Santhoshini Velusamy. Streaming Approximation Resistance of Every Ordering CSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 17:1-17:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{singer_et_al:LIPIcs.APPROX/RANDOM.2021.17,
  author =	{Singer, Noah and Sudan, Madhu and Velusamy, Santhoshini},
  title =	{{Streaming Approximation Resistance of Every Ordering CSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{17:1--17:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.17},
  URN =		{urn:nbn:de:0030-drops-147106},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.17},
  annote =	{Keywords: Streaming approximations, approximation resistance, constraint satisfaction problems, ordering constraint satisfaction problems}
}
Document
RANDOM
Ideal-Theoretic Explanation of Capacity-Achieving Decoding

Authors: Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, and Madhu Sudan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we present an abstract framework for some algebraic error-correcting codes with the aim of capturing codes that are list-decodable to capacity, along with their decoding algorithm. In the polynomial ideal framework, a code is specified by some ideals in a polynomial ring, messages are polynomials and their encoding is the residue modulo the ideals. We present an alternate way of viewing this class of codes in terms of linear operators, and show that this alternate view makes their algorithmic list-decodability amenable to analysis. Our framework leads to a new class of codes that we call affine Folded Reed-Solomon codes (which are themselves a special case of the broader class we explore). These codes are common generalizations of the well-studied Folded Reed-Solomon codes and Univariate Multiplicity codes, while also capturing the less-studied Additive Folded Reed-Solomon codes as well as a large family of codes that were not previously known/studied. More significantly our framework also captures the algorithmic list-decodability of the constituent codes. Specifically, we present a unified view of the decoding algorithm for ideal-theoretic codes and show that the decodability reduces to the analysis of the distance of some related codes. We show that good bounds on this distance lead to capacity-achieving performance of the underlying code, providing a unifying explanation of known capacity-achieving results. In the specific case of affine Folded Reed-Solomon codes, our framework shows that they are list-decodable up to capacity (for appropriate setting of the parameters), thereby unifying the previous results for Folded Reed-Solomon, Multiplicity and Additive Folded Reed-Solomon codes.
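
Schematically, the encoding map of the polynomial ideal framework sends a message polynomial to its residues modulo the chosen ideals. The specific ideals below, which recover Reed-Solomon and (up to indexing conventions) folded Reed-Solomon codes, follow standard presentations and are shown only for orientation, not as the paper's exact notation:

\[
\mathrm{Enc}(f) \;=\; \bigl(f \bmod I_1,\ f \bmod I_2,\ \ldots,\ f \bmod I_n\bigr),
\qquad f \in \mathbb{F}_q[X],\ \deg f < k,
\]
\[
\text{Reed-Solomon: } I_i = (X - a_i), \text{ so } f \bmod I_i = f(a_i);
\qquad
\text{Folded Reed-Solomon: } I_i = \prod_{j=0}^{s-1}\bigl(X - \gamma^{\,s(i-1)+j}\bigr).
\]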

Cite as

Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, and Madhu Sudan. Ideal-Theoretic Explanation of Capacity-Achieving Decoding. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 56:1-56:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhandari_et_al:LIPIcs.APPROX/RANDOM.2021.56,
  author =	{Bhandari, Siddharth and Harsha, Prahladh and Kumar, Mrinal and Sudan, Madhu},
  title =	{{Ideal-Theoretic Explanation of Capacity-Achieving Decoding}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{56:1--56:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.56},
  URN =		{urn:nbn:de:0030-drops-147499},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.56},
  annote =	{Keywords: List Decodability, List Decoding Capacity, Polynomial Ideal Codes, Multiplicity Codes, Folded Reed-Solomon Codes}
}
Document
Algorithmic Polarization for Hidden Markov Models

Authors: Venkatesan Guruswami, Preetum Nakkiran, and Madhu Sudan

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Using a mild variant of polar codes, we design linear compression schemes that compress Hidden Markov sources (where the source is a Markov chain, but whose state is not necessarily observable from its output) and decode from Hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial both in the gap to capacity and the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and polynomial bounds were not available with respect to either the gap to capacity or mixing time. Our results operate in the setting where the source (or the channel) is known. If the source is unknown then compression at such short lengths would lead to effective algorithms for learning parity with noise; thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown.

Cite as

Venkatesan Guruswami, Preetum Nakkiran, and Madhu Sudan. Algorithmic Polarization for Hidden Markov Models. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2019.39,
  author =	{Guruswami, Venkatesan and Nakkiran, Preetum and Sudan, Madhu},
  title =	{{Algorithmic Polarization for Hidden Markov Models}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.39},
  URN =		{urn:nbn:de:0030-drops-101326},
  doi =		{10.4230/LIPIcs.ITCS.2019.39},
  annote =	{Keywords: polar codes, error-correcting codes, compression, hidden markov model}
}
Document
Polar Codes with Exponentially Small Error at Finite Block Length

Authors: Jaroslaw Blasiok, Venkatesan Guruswami, and Madhu Sudan

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We show that the entire class of polar codes (up to a natural necessary condition) converges to capacity at block lengths polynomial in the gap to capacity, while simultaneously achieving failure probabilities that are exponentially small in the block length (i.e., decoding fails with probability exp(-N^{Omega(1)}) for codes of length N). Previously this combination was known only for one specific family within the class of polar codes, whereas we establish this whenever the polar code exhibits a condition necessary for any polarization. Our results adapt and strengthen a local analysis of polar codes due to the authors with Nakkiran and Rudra [Proc. STOC 2018]. Their analysis related the time-local behavior of a martingale to its global convergence, and this allowed them to prove that the broad class of polar codes converges to capacity at polynomial block lengths. Their analysis easily adapts to show exponentially small failure probabilities, provided the associated martingale, the "Arikan martingale", exhibits a corresponding strong local effect. The main contribution of this work is a much stronger local analysis of the Arikan martingale. This leads to the general result claimed above. In addition to our general result, we also show, for the first time, polar codes that achieve failure probability exp(-N^{beta}) for any beta < 1 while converging to capacity at block length polynomial in the gap to capacity. Finally, we also show that the "local" approach can be combined with any analysis of failure probability of an arbitrary polar code to get essentially the same failure probability while achieving block length polynomial in the gap to capacity.

Cite as

Jaroslaw Blasiok, Venkatesan Guruswami, and Madhu Sudan. Polar Codes with Exponentially Small Error at Finite Block Length. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{blasiok_et_al:LIPIcs.APPROX-RANDOM.2018.34,
  author =	{Blasiok, Jaroslaw and Guruswami, Venkatesan and Sudan, Madhu},
  title =	{{Polar Codes with Exponentially Small Error at Finite Block Length}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{34:1--34:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.34},
  URN =		{urn:nbn:de:0030-drops-94382},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.34},
  annote =	{Keywords: Polar codes, error exponent, rate of polarization}
}
Document
Synchronization Strings: List Decoding for Insertions and Deletions

Authors: Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study codes that are list-decodable under insertions and deletions ("insdel codes"). Specifically, we consider the setting where, given a codeword x of length n over some finite alphabet Sigma of size q, delta * n codeword symbols may be adversarially deleted and gamma * n symbols may be adversarially inserted to yield a corrupted word w. A code is said to be list-decodable if there is an (efficient) algorithm that, given w, reports a small list of codewords that includes the original codeword x. Given delta and gamma, we study the rate R for which there exist a constant q and a list size L such that there exist codes of rate R correcting delta-fraction deletions and gamma-fraction insertions while reporting lists of size at most L. Using the concept of synchronization strings, introduced by the first two authors [Proc. STOC 2017], we show some surprising results. We show that for every 0 <= delta < 1, every 0 <= gamma < infty and every epsilon > 0 there exist codes of rate 1 - delta - epsilon and constant alphabet (so q = O_{delta,gamma,epsilon}(1)) and sub-logarithmic list sizes. Furthermore, our codes are accompanied by efficient (polynomial time) decoding algorithms. We stress that the fraction of insertions can be arbitrarily large (more than 100%), and the rate is independent of this parameter. We also prove several tight bounds on the parameters of list-decodable insdel codes. In particular, we show that the alphabet size of insdel codes needs to be exponentially large in epsilon^{-1}, where epsilon is the gap to capacity above. Our result even applies to settings where the unique-decoding capacity equals the list-decoding capacity and, when it does, it shows that the alphabet size needs to be exponentially large in the gap to capacity. This is in sharp contrast to the Hamming error model, where an alphabet size polynomial in epsilon^{-1} suffices for unique decoding. This lower bound also shows that the exponentially large alphabet size used in previous constructions of insdel codes is actually necessary! Our result sheds light on the remarkable asymmetry between the impact of insertions and deletions from the point of view of error-correction: whereas deletions cost in the rate of the code, insertion costs are borne by the adversary and not the code! Our results also highlight the dominance of the model of insertions and deletions over the Hamming model: a Hamming error is equal to one insertion and one deletion (at the same location). Thus the effect of delta-fraction Hamming errors can be simulated by delta-fraction of deletions and delta-fraction of insertions - but insdel codes can deal with many more insertions without loss in rate (though at the price of higher alphabet size).

Cite as

Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan. Synchronization Strings: List Decoding for Insertions and Deletions. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2018.76,
  author =	{Haeupler, Bernhard and Shahrasbi, Amirbehshad and Sudan, Madhu},
  title =	{{Synchronization Strings: List Decoding for Insertions and Deletions}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.76},
  URN =		{urn:nbn:de:0030-drops-90807},
  doi =		{10.4230/LIPIcs.ICALP.2018.76},
  annote =	{Keywords: List Decoding, Insertions and Deletions, Synchronization Strings}
}
Document
Local Decoding and Testing of Polynomials over Grids

Authors: Srikanth Srinivasan and Madhu Sudan

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
The well-known DeMillo-Lipton-Schwartz-Zippel lemma says that n-variate polynomials of total degree at most d over grids, i.e. sets of the form A_1 × A_2 × ⋯ × A_n, form error-correcting codes (of relative distance at least 2^{-d} provided min_i |A_i| ≥ 2). In this work we explore their local decodability and local testability. While these aspects have been studied extensively when A_1 = ⋯ = A_n = 𝔽_q are the same finite field, the setting when the A_i's are not the full field does not seem to have been explored before. In this work we focus on the case A_i = {0,1} for every i. We show that for every field (finite or otherwise) there is a test whose query complexity depends only on the degree (and not on the number of variables). In contrast, we show that decodability is possible over fields of positive characteristic (with query complexity growing with the degree of the polynomial and the characteristic), but not over the reals, where the query complexity must grow with n. As a consequence we get a natural example of a code (one with a transitive group of symmetries) that is locally testable but not locally decodable. Classical results on local decoding and testing of polynomials have relied on the 2-transitive symmetries of the space of low-degree polynomials (under affine transformations). Grids do not possess this symmetry, so we introduce some new techniques to overcome this handicap, and in particular use the hypercontractivity of the (constant-weight) noise operator on the Hamming cube.
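
Written as a probability bound, the distance statement quoted above is:

\[
\Pr_{x \,\sim\, A_1 \times A_2 \times \cdots \times A_n}\bigl[\,P(x) \neq 0\,\bigr] \;\ge\; 2^{-d}
\qquad
\text{for every nonzero } P \text{ of total degree at most } d,\ \text{provided } \min_i |A_i| \ge 2,
\]

where x is drawn uniformly from the grid.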

Cite as

Srikanth Srinivasan and Madhu Sudan. Local Decoding and Testing of Polynomials over Grids. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{srinivasan_et_al:LIPIcs.ITCS.2018.26,
  author =	{Srinivasan, Srikanth and Sudan, Madhu},
  title =	{{Local Decoding and Testing of Polynomials over Grids}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.26},
  URN =		{urn:nbn:de:0030-drops-83294},
  doi =		{10.4230/LIPIcs.ITCS.2018.26},
  annote =	{Keywords: Property testing, Coding theory, Low-degree testing, Local decoding, Local testing}
}
Document
Compression in a Distributed Setting

Authors: Badih Ghazi, Elad Haramaty, Pritish Kamath, and Madhu Sudan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Motivated by an attempt to understand the formation and development of (human) language, we introduce a "distributed compression" problem. In our problem a sequence of pairs of players from a set of K players is chosen and tasked to communicate messages drawn from an unknown distribution Q. Arguably languages are created and evolve to compress frequently occurring messages, and we focus on this aspect. The only knowledge that players have about the distribution Q is from previously drawn samples, but these samples differ from player to player. The only common knowledge between the players is restricted to a common prior distribution P and some constant number of bits of information (such as a learning algorithm). Letting T_epsilon denote the number of iterations it would take for a typical player to obtain an epsilon-approximation to Q in total variation distance, we ask whether T_epsilon iterations suffice to compress the messages down roughly to their entropy and give a partial positive answer. We show that a natural uniform algorithm can compress the communication down to an average cost per message of O(H(Q) + log(D(P || Q))) in Õ(T_epsilon) iterations while allowing for O(epsilon)-error, where D(. || .) denotes the KL-divergence between distributions. For large divergences this compares favorably with the static algorithm that ignores all samples and compresses down to H(Q) + D(P || Q) bits, while not requiring the T_epsilon * K iterations that it would take players to develop optimal but separate compressions for each pair of players. Along the way we introduce a "data-structural" view of the task of communicating with a natural language and show that our natural algorithm can also be implemented by an efficient data structure, whose storage is comparable to the storage requirements of Q and whose query complexity is comparable to the lengths of the messages to be compressed. Our results give a plausible mathematical analogy to the mechanisms by which human languages get created and evolve, and in particular highlight the possibility of coordination towards a joint task (agreeing on a language) while engaging in distributed learning.

Cite as

Badih Ghazi, Elad Haramaty, Pritish Kamath, and Madhu Sudan. Compression in a Distributed Setting. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 19:1-19:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ghazi_et_al:LIPIcs.ITCS.2017.19,
  author =	{Ghazi, Badih and Haramaty, Elad and Kamath, Pritish and Sudan, Madhu},
  title =	{{Compression in a Distributed Setting}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{19:1--19:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.19},
  URN =		{urn:nbn:de:0030-drops-81763},
  doi =		{10.4230/LIPIcs.ITCS.2017.19},
  annote =	{Keywords: Distributed Compression, Communication, Language Evolution, Isolating Hash Families}
}
Document
The Power of Shared Randomness in Uncertain Communication

Authors: Badih Ghazi and Madhu Sudan

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, under the promise that f=g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open is related to the power that public randomness brings to uncertain communication. Can Alice and Bob achieve efficient communication amid uncertainty without using public randomness? And how powerful are public-coin protocols in overcoming uncertainty? Motivated by these two questions:
- We prove the first separation between private-coin uncertain communication and public-coin uncertain communication. Namely, we exhibit a function class for which the communication in the standard model and the public-coin uncertain communication are O(1) while the private-coin uncertain communication is a growing function of n (the length of the inputs). This lower bound (proved with respect to the uniform distribution) is in sharp contrast with the case of public-coin uncertain communication, which was shown by the previous work to be within a constant factor of the certain communication. This lower bound also implies the first separation between public-coin uncertain communication and deterministic uncertain communication. Interestingly, we also show that if Alice and Bob imperfectly share a sequence of random bits (a setup weaker than public randomness), then achieving a constant blow-up in communication is still possible.
- We improve the lower bound of the previous work on public-coin uncertain communication. Namely, we exhibit a function class and a distribution (with mutual information I approx n) for which the one-way certain communication is k bits but the one-way public-coin uncertain communication is at least Omega(sqrt{k}*sqrt{I}) bits.
Our proofs introduce new problems in the standard communication complexity model and prove lower bounds for these problems. Both the problems and the lower bound techniques may be of general interest.

Cite as

Badih Ghazi and Madhu Sudan. The Power of Shared Randomness in Uncertain Communication. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ghazi_et_al:LIPIcs.ICALP.2017.49,
  author =	{Ghazi, Badih and Sudan, Madhu},
  title =	{{The Power of Shared Randomness in Uncertain Communication}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.49},
  URN =		{urn:nbn:de:0030-drops-74871},
  doi =		{10.4230/LIPIcs.ICALP.2017.49},
  annote =	{Keywords: randomness, uncertainty, communication, imperfectly shared randomness, lower bounds}
}
Document
List Decoding Group Homomorphisms Between Supersolvable Groups

Authors: Alan Guo and Madhu Sudan

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
We show that the set of homomorphisms between two supersolvable groups can be locally list decoded up to the minimum distance of the code, extending the results of Dinur et al. (Proc. STOC 2008) who studied the case where the groups are abelian. Moreover, when specialized to the abelian case, our proof is more streamlined and gives a better constant in the exponent of the list size. The constant is improved from about 3.5 million to 105.

Cite as

Alan Guo and Madhu Sudan. List Decoding Group Homomorphisms Between Supersolvable Groups. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 737-747, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{guo_et_al:LIPIcs.APPROX-RANDOM.2014.737,
  author =	{Guo, Alan and Sudan, Madhu},
  title =	{{List Decoding Group Homomorphisms Between Supersolvable Groups}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{737--747},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.737},
  URN =		{urn:nbn:de:0030-drops-47359},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.737},
  annote =	{Keywords: Group theory, error-correcting codes, locally decodable codes}
}
Document
Invited Talk
Physical limits of Communication (Invited Talk)

Authors: Madhu Sudan

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
We describe recent work with Sanjeev Khanna (U. Penn.) where we explore potential axioms about the mechanics of information transmission, with a view to understanding whether continuous (analog) signals can carry more information than discrete signals.

Cite as

Madhu Sudan. Physical limits of Communication (Invited Talk). In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 4-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{sudan:LIPIcs.FSTTCS.2011.4,
  author =	{Sudan, Madhu},
  title =	{{Physical limits of Communication}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{4--5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.4},
  URN =		{urn:nbn:de:0030-drops-33392},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.4},
  annote =	{Keywords: Analog signals, information capacity, delays}
}
Document
Testing Linear-Invariant Non-Linear Properties

Authors: Arnab Bhattacharyya, Victor Chen, Madhu Sudan, and Ning Xie

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
We consider the task of testing properties of Boolean functions that are invariant under linear transformations of the Boolean cube. Previous work in property testing, including the linearity test and the test for Reed-Muller codes, has mostly focused on such tasks for linear properties. The one exception is a test due to Green for "triangle freeness": a function f : 𝔽_2^n → 𝔽_2 satisfies this property if f(x), f(y), f(x+y) do not all equal 1, for any pair x, y ∈ 𝔽_2^n. Here we extend this test to a more systematic study of testing for linear-invariant non-linear properties. We consider properties that are described by a single forbidden pattern (and its linear transformations), i.e., a property is given by k points v_1, …, v_k ∈ 𝔽_2^k, and f : 𝔽_2^n → 𝔽_2 satisfies the property if for all linear maps L : 𝔽_2^k → 𝔽_2^n it is the case that f(L(v_1)), …, f(L(v_k)) do not all equal 1. We show that this property is testable if the underlying matroid specified by v_1, …, v_k is a graphic matroid. This extends Green's result to an infinite class of new properties. Our techniques extend those of Green, and in particular we establish a link between the notion of "1-complexity linear systems" of Green and Tao and graphic matroids, to derive the results.
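
The triangle-freeness test described above has a simple form; the Python sketch below spells out the query pattern (the repetition count is an arbitrary illustrative choice; how many repetitions suffice to reject functions that are far from the property is exactly what the analyses discussed above establish).

import random

def triangle_freeness_test(f, n, reps=200, seed=0):
    # Tester for the "triangle freeness" pattern quoted above: f violates the
    # property iff some pair x, y has f(x) = f(y) = f(x+y) = 1.  The test draws
    # x, y uniformly from F_2^n and checks that pattern; `reps` is an arbitrary
    # illustrative choice.
    rng = random.Random(seed)
    for _ in range(reps):
        x = [rng.randrange(2) for _ in range(n)]
        y = [rng.randrange(2) for _ in range(n)]
        xy = [(a + b) % 2 for a, b in zip(x, y)]
        if f(x) == 1 and f(y) == 1 and f(xy) == 1:
            return False   # a "triangle" was found: reject
    return True            # no violation observed: accept

# Toy example: f(x) = x_1 AND x_2 is triangle-free, since f(x) = f(y) = 1
# forces the first two coordinates of x+y to be 0 and hence f(x+y) = 0.
print(triangle_freeness_test(lambda x: x[0] & x[1], n=8))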

Cite as

Arnab Bhattacharyya, Victor Chen, Madhu Sudan, and Ning Xie. Testing Linear-Invariant Non-Linear Properties. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 135-146, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.STACS.2009.1823,
  author =	{Bhattacharyya, Arnab and Chen, Victor and Sudan, Madhu and Xie, Ning},
  title =	{{Testing Linear-Invariant Non-Linear Properties}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{135--146},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1823},
  URN =		{urn:nbn:de:0030-drops-18235},
  doi =		{10.4230/LIPIcs.STACS.2009.1823},
  annote =	{Keywords: }
}
Document
Linear, Semidefinite Programming and Randomization Methods for Combinatorial Optimization Problems (Dagstuhl Seminar 00041)

Authors: Klaus Jansen, Jose Rolim, and Madhu Sudan

Published in: Dagstuhl Seminar Reports, Volume 1 (2021)



Cite as

Klaus Jansen, Jose Rolim, and Madhu Sudan. Linear, Semidefinite Programming and Randomization Methods for Combinatorial Optimization Problems (Dagstuhl Seminar 00041). Dagstuhl Seminar Report 263, pp. 1-22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2000)


BibTeX

@TechReport{jansen_et_al:DagSemRep.263,
  author =	{Jansen, Klaus and Rolim, Jose and Sudan, Madhu},
  title =	{{Linear, Semidefinite Programming and Randomization Methods for Combinatorial Optimization Problems (Dagstuhl Seminar 00041)}},
  pages =	{1--22},
  ISSN =	{1619-0203},
  year =	{2000},
  type = 	{Dagstuhl Seminar Report},
  number =	{263},
  institution =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemRep.263},
  URN =		{urn:nbn:de:0030-drops-151486},
  doi =		{10.4230/DagSemRep.263},
}