6 Search Results for "Mazumdar, Arya"


RANDOM
When Do Low-Rate Concatenated Codes Approach The Gilbert-Varshamov Bound?

Authors: Dean Doron, Jonathan Mosheiff, and Mary Wootters

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The Gilbert-Varshamov (GV) bound is a classical existential result in coding theory. It implies that a random linear binary code of rate ε² has relative distance at least 1/2 - O(ε) with high probability. However, it is a major challenge to construct explicit codes with similar parameters. One hope for derandomizing the Gilbert-Varshamov construction is code concatenation: we begin with a (hopefully explicit) outer code 𝒞_out over a large alphabet, and concatenate it with a small binary random linear code 𝒞_in. It is known that when we use an independent small code for each coordinate, the result lies on the GV bound with high probability, but this still uses a lot of randomness. In this paper, we consider the question of whether code concatenation with a single random linear inner code 𝒞_in can lie on the GV bound, and, if so, what conditions on 𝒞_out are sufficient for this. We show, first, that there do exist linear outer codes 𝒞_out that are "good" for concatenation in this sense (in fact, most linear codes are good). We also provide two sufficient conditions on 𝒞_out such that, if 𝒞_out satisfies them, 𝒞_out∘𝒞_in will likely lie on the GV bound. We hope that these conditions may inspire future work towards constructing explicit codes 𝒞_out.
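As a concrete illustration of the concatenation operation discussed above, here is a minimal sketch with made-up parameters (this is not the paper's construction, and all function names are ours): each symbol of the outer codeword is re-encoded by one and the same random linear inner code.

```python
import random

def random_inner_generator(m, n_in, rng):
    # Generator matrix of a random binary linear inner code C_in: F_2^m -> F_2^{n_in}.
    return [[rng.randint(0, 1) for _ in range(n_in)] for _ in range(m)]

def encode_inner(G, msg_bits):
    # Matrix-vector product over GF(2): XOR together the rows of G selected by msg_bits.
    out = [0] * len(G[0])
    for bit, row in zip(msg_bits, G):
        if bit:
            out = [o ^ r for o, r in zip(out, row)]
    return out

def concatenate(outer_codeword, G_in, m):
    # Each outer symbol (an integer in [0, 2^m)) is expanded to m bits and
    # re-encoded by the SAME inner code C_in -- the single-inner-code setting.
    coded = []
    for sym in outer_codeword:
        bits = [(sym >> i) & 1 for i in range(m)]
        coded.extend(encode_inner(G_in, bits))
    return coded
```

Because the same G_in is reused for every coordinate, the only randomness consumed is the m·n_in bits of G_in, as opposed to drawing a fresh inner code per coordinate.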

Cite as

Dean Doron, Jonathan Mosheiff, and Mary Wootters. When Do Low-Rate Concatenated Codes Approach The Gilbert-Varshamov Bound?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 53:1-53:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{doron_et_al:LIPIcs.APPROX/RANDOM.2024.53,
  author =	{Doron, Dean and Mosheiff, Jonathan and Wootters, Mary},
  title =	{{When Do Low-Rate Concatenated Codes Approach The Gilbert-Varshamov Bound?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{53:1--53:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.53},
  URN =		{urn:nbn:de:0030-drops-210467},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.53},
  annote =	{Keywords: Error-correcting codes, Concatenated codes, Derandomization, Gilbert-Varshamov bound}
}
Support Recovery in Universal One-Bit Compressed Sensing

Authors: Arya Mazumdar and Soumyabrata Pal

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
One-bit compressed sensing (1bCS) is an extreme-quantization signal acquisition method that has been studied intermittently over the past decade. In 1bCS, linear samples of a high-dimensional signal are quantized to only one bit per sample (the sign of the measurement). The extreme quantization makes it an interesting case study of the more general single-index or generalized linear models. At the same time, it can also be thought of as a "design" version of learning a binary linear classifier, or halfspace learning. Assuming the original signal vector is sparse, existing results in 1bCS either aim to find the support of the vector or to approximate the signal within an ε-ball. The focus of this paper is support recovery, which often also makes approximate signal recovery computationally easier. A universal measurement matrix for 1bCS refers to a single set of measurements that works for all sparse signals. With universality, it is known that Θ̃(k²) 1bCS measurements are necessary and sufficient for support recovery (where k denotes the sparsity). In this work, we show that it is possible to universally recover the support, with a small number of false positives, using Õ(k^{3/2}) measurements. If the dynamic range of the signal vector is known, then with a different technique this result can be improved to only Õ(k) measurements. Other results on universal but approximate support recovery are also provided in this paper. All of our main recovery algorithms are simple and run in polynomial time.
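The 1bCS observation model described above fits in a few lines. This is a sketch of the measurement model only, not any of the paper's recovery algorithms; the tie-breaking convention (zero maps to +1) and the names are ours.

```python
def one_bit_measurements(A, x):
    # Each linear sample <a_i, x> is quantized to a single bit: its sign.
    # A is an m x n measurement matrix; x is the length-n signal vector.
    # Convention (ours): a sample of exactly 0 is reported as +1.
    return [1 if sum(a * v for a, v in zip(row, x)) >= 0 else -1 for row in A]
```

Support recovery then asks: from the sign pattern alone, for how many measurement matrices A (one matrix for all k-sparse x, in the universal setting) can the nonzero coordinates of x be identified?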

Cite as

Arya Mazumdar and Soumyabrata Pal. Support Recovery in Universal One-Bit Compressed Sensing. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 106:1-106:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{mazumdar_et_al:LIPIcs.ITCS.2022.106,
  author =	{Mazumdar, Arya and Pal, Soumyabrata},
  title =	{{Support Recovery in Universal One-Bit Compressed Sensing}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{106:1--106:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.106},
  URN =		{urn:nbn:de:0030-drops-157028},
  doi =		{10.4230/LIPIcs.ITCS.2022.106},
  annote =	{Keywords: Superset Recovery, Approximate Support Recovery, List union-free family, Descartes’ rule of signs}
}
RANDOM
Connectivity of Random Annulus Graphs and the Geometric Block Model

Authors: Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, and Barna Saha

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
The random geometric graph (Gilbert, 1961) is a basic model of random graphs for spatial networks, proposed shortly after the introduction of the Erdős-Rényi random graphs. The geometric block model (GBM) is a probabilistic model for community detection defined over random geometric graphs (RGGs), similar in spirit to the popular stochastic block model, which is defined over Erdős-Rényi random graphs. The GBM naturally inherits many desirable properties of RGGs, such as transitivity ("friends having common friends"), and has been shown to model many real-world networks better than the stochastic block model. Analyzing the properties of a GBM requires new tools and perspectives to handle correlation in edge formation. In this paper, we study necessary and sufficient conditions for community recovery over the GBM in the connectivity regime. We provide efficient algorithms that recover the communities exactly with high probability and match the lower bound within a small constant factor. This requires us to prove new connectivity results for vertex-random graphs, or random annulus graphs, which are natural generalizations of random geometric graphs. A vertex-random graph is a model of random graphs where the randomness lies in the vertices, as opposed to an Erdős-Rényi random graph, where the randomness lies in the edges. A vertex-random graph G(n, [r_1, r_2]), 0 <= r_1 < r_2 <= 1, with n vertices is defined by assigning a real number in [0,1] randomly and uniformly to each vertex and adding an edge between two vertices if the "distance" between the corresponding two random numbers is between r_1 and r_2. For the special case of r_1 = 0, this corresponds to a random geometric graph in one dimension. We can extend this model naturally to higher dimensions; these higher-dimensional counterparts are referred to as random annulus graphs. Random annulus graphs appear naturally whenever the well-known Goldilocks principle ("not too close, not too far") holds in a network.
In this paper, we study the connectivity properties of such graphs, providing both necessary and sufficient conditions. We show a surprising long-edge phenomenon for vertex-random graphs: the minimum gap between r_1 and r_2 required for connectivity is significantly smaller when r_1 > 0 than when r_1 = 0 (the RGG case). We then extend the connectivity results to higher dimensions. These results play a crucial role in analyzing the GBM.
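The vertex-random graph G(n, [r_1, r_2]) defined above is easy to simulate. A minimal sketch follows; we interpret "distance" as circular distance (modulo 1), which is one common convention and an assumption on our part, and the function name is ours.

```python
import random
from itertools import combinations

def vertex_random_graph(n, r1, r2, seed=0):
    # Assign each vertex a uniform label in [0,1]; join u and v iff the
    # circular distance between their labels lies in [r1, r2].
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n)]
    edges = set()
    for u, v in combinations(range(n), 2):
        d = abs(pos[u] - pos[v])
        d = min(d, 1.0 - d)  # distance on the circle of circumference 1
        if r1 <= d <= r2:
            edges.add((u, v))
    return pos, edges
```

Setting r1 = 0 recovers the one-dimensional RGG; with the same seed, the annulus graph's edge set is a subset of the RGG's, which makes the long-edge phenomenon (connectivity with a smaller gap r_2 - r_1 when r_1 > 0) all the more striking.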

Cite as

Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, and Barna Saha. Connectivity of Random Annulus Graphs and the Geometric Block Model. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 53:1-53:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{galhotra_et_al:LIPIcs.APPROX-RANDOM.2019.53,
  author =	{Galhotra, Sainyam and Mazumdar, Arya and Pal, Soumyabrata and Saha, Barna},
  title =	{{Connectivity of Random Annulus Graphs and the Geometric Block Model}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{53:1--53:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.53},
  URN =		{urn:nbn:de:0030-drops-112682},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.53},
  annote =	{Keywords: random graphs, geometric graphs, community detection, block model}
}
Trace Reconstruction: Generalized and Parameterized

Authors: Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
In the beautifully simple-to-state problem of trace reconstruction, the goal is to reconstruct an unknown binary string x given random "traces" of x, where each trace is generated by deleting each coordinate of x independently with probability p < 1. The problem is well studied both when the unknown string is arbitrary and when it is chosen uniformly at random. For both settings, there is still an exponential gap between the upper and lower sample complexity bounds, and our understanding of the problem is still surprisingly limited. In this paper, we consider natural parameterizations and generalizations of this problem in an effort to attain a deeper and more comprehensive understanding. Perhaps our most surprising results are: 1) We prove that exp(O(n^(1/4) sqrt{log n})) traces suffice for reconstructing arbitrary matrices. In the matrix version of the problem, each row and column of an unknown sqrt{n} x sqrt{n} matrix is deleted independently with probability p. This contrasts with the best known results for sequence reconstruction, where the best known upper bound is exp(O(n^(1/3))). 2) An optimal result for random matrix reconstruction: we show that Theta(log n) traces are necessary and sufficient. This is in contrast to the problem for random sequences, where there is a super-logarithmic lower bound and the best known upper bound is exp(O(log^(1/3) n)). 3) We show that exp(O(k^(1/3) log^(2/3) n)) traces suffice to reconstruct k-sparse strings, providing an improvement over the best known sequence reconstruction results when k = o(n/log^2 n). 4) We show that poly(n) traces suffice if x is k-sparse and we additionally have a "separation" promise, specifically that the indices of 1's in x all differ by Omega(k log n).
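The deletion-channel "trace" used throughout, and the matrix variant of point 1), can be sketched as follows (illustrative only; function names are ours):

```python
import random

def trace(x, p, rng):
    # A trace of x: delete each coordinate independently with probability p.
    return [bit for bit in x if rng.random() >= p]

def matrix_trace(M, p, rng):
    # Matrix variant: each row and each column of M is deleted independently
    # with probability p (a deleted row/column is removed entirely).
    rows = [i for i in range(len(M)) if rng.random() >= p]
    cols = [j for j in range(len(M[0])) if rng.random() >= p]
    return [[M[i][j] for j in cols] for i in rows]
```

The reconstruction question is then: given many independent traces drawn this way, how many are needed to recover x (or M) with high probability?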

Cite as

Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal. Trace Reconstruction: Generalized and Parameterized. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 68:1-68:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{krishnamurthy_et_al:LIPIcs.ESA.2019.68,
  author =	{Krishnamurthy, Akshay and Mazumdar, Arya and McGregor, Andrew and Pal, Soumyabrata},
  title =	{{Trace Reconstruction: Generalized and Parameterized}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{68:1--68:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.68},
  URN =		{urn:nbn:de:0030-drops-111891},
  doi =		{10.4230/LIPIcs.ESA.2019.68},
  annote =	{Keywords: deletion channel, trace reconstruction, matrix reconstruction}
}
Track A: Algorithms, Complexity and Games
Information-Theoretic and Algorithmic Thresholds for Group Testing

Authors: Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, and Philipp Loick

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves of a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [Aldridge et al. 2016], the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [Aldridge et al. 2014, Johnson et al. 2019].
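The pooled-test model above, paired with the simplest standard decoding rule, can be sketched as follows. The decoder shown is COMP (anyone appearing in a negative test is certainly healthy), a textbook baseline that is not necessarily one of the two algorithms analysed in the paper; the function names are ours.

```python
def run_tests(groups, infected):
    # A pooled test is positive iff its group contains an infected individual.
    return [any(i in infected for i in group) for group in groups]

def comp_decode(groups, outcomes, n):
    # COMP rule: every individual appearing in a negative test is certainly
    # healthy; declare everyone else infected (this may yield false positives,
    # but never misses a truly infected individual).
    healthy = set()
    for group, positive in zip(groups, outcomes):
        if not positive:
            healthy.update(group)
    return [i for i in range(n) if i not in healthy]
```

The threshold question is how few groups suffice, for a random design in which every individual joins an equal number of groups, before any decoder (efficient or not) can succeed.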

Cite as

Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, and Philipp Loick. Information-Theoretic and Algorithmic Thresholds for Group Testing. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{cojaoghlan_et_al:LIPIcs.ICALP.2019.43,
  author =	{Coja-Oghlan, Amin and Gebhard, Oliver and Hahn-Klimroth, Max and Loick, Philipp},
  title =	{{Information-Theoretic and Algorithmic Thresholds for Group Testing}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.43},
  URN =		{urn:nbn:de:0030-drops-106196},
  doi =		{10.4230/LIPIcs.ICALP.2019.43},
  annote =	{Keywords: Group testing problem, phase transitions, information theory, efficient algorithms, sharp threshold, Bayesian inference}
}
Coding Theory for Inference, Learning and Optimization (Dagstuhl Seminar 18112)

Authors: Po-Ling Loh, Arya Mazumdar, Dimitris Papailiopoulos, and Rüdiger Urbanke

Published in: Dagstuhl Reports, Volume 8, Issue 3 (2018)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 18112, "Coding Theory for Inference, Learning and Optimization." Coding theory has recently found new applications in areas such as distributed machine learning, dimension reduction, and a variety of statistical problems involving estimation and inference. In machine learning applications that use large-scale data, it is desirable to communicate the results of distributed computations in an efficient and robust manner. In dimension reduction applications, the pseudorandom properties of algebraic codes may be used to construct projection matrices that are deterministic and facilitate algorithmic efficiency. Finally, relationships that have been forged between coding theory and problems in theoretical computer science, such as k-SAT or the planted clique problem, lead to a new interpretation of the sharp thresholds encountered in these settings in terms of thresholds in channel coding theory. The aim of this Dagstuhl Seminar was to draw together researchers from industry and academia who are working in coding theory, particularly in these different (and somewhat disparate) application areas of machine learning and inference. The discussions and collaborations facilitated by this seminar were intended to spark new ideas about how coding theory may be used to improve and inform modern techniques for data analytics.

Cite as

Po-Ling Loh, Arya Mazumdar, Dimitris Papailiopoulos, and Rüdiger Urbanke. Coding Theory for Inference, Learning and Optimization (Dagstuhl Seminar 18112). In Dagstuhl Reports, Volume 8, Issue 3, pp. 60-73, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@Article{loh_et_al:DagRep.8.3.60,
  author =	{Loh, Po-Ling and Mazumdar, Arya and Papailiopoulos, Dimitris and Urbanke, R\"{u}diger},
  title =	{{Coding Theory for Inference, Learning and Optimization (Dagstuhl Seminar 18112)}},
  pages =	{60--73},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2018},
  volume =	{8},
  number =	{3},
  editor =	{Loh, Po-Ling and Mazumdar, Arya and Papailiopoulos, Dimitris and Urbanke, R\"{u}diger},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.3.60},
  URN =		{urn:nbn:de:0030-drops-92977},
  doi =		{10.4230/DagRep.8.3.60},
  annote =	{Keywords: Coding theory, Distributed optimization, Machine learning, Threshold phenomena}
}