Search Results

Documents authored by Nakos, Vasileios


Document
Track A: Algorithms, Complexity and Games
Improved Sublinear-Time Edit Distance for Preprocessed Strings

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both strings are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results:
- After preprocessing one string in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time (n/k + k) ⋅ n^{o(1)}.
- After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k ⋅ n^{o(1)})-gap edit distance in time k ⋅ n^{o(1)}.
Both results improve upon the previously best known bounds, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework of Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm of Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts of their algorithm by faster and simpler solutions that exploit the preprocessing.
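For concreteness, here is a minimal Python sketch of the decision problem itself, using the naive quadratic-time dynamic program as a reference. It is emphatically not the sublinear-time, preprocessing-based algorithm of the paper, and all function names are illustrative.

def edit_distance(a: str, b: str) -> int:
    # Classic O(|a| * |b|) dynamic program over prefixes.
    n, m = len(a), len(b)
    prev = list(range(m + 1))
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cur[j] = min(prev[j] + 1,                            # delete a[i-1]
                         cur[j - 1] + 1,                         # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitute
        prev = cur
    return prev[m]

def gap_edit_distance(a: str, b: str, k: int, K: int) -> str:
    # The (k, K)-gap problem: report "close" if the distance is at most k and
    # "far" if it is at least K; any answer is acceptable strictly in between.
    d = edit_distance(a, b)
    if d <= k:
        return "close"
    if d >= K:
        return "far"
    return "either answer acceptable"

# Example: gap_edit_distance("kitten", "sitting", k=1, K=5) -> the distance is 3,
# which lies in the gap, so either answer is acceptable.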

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Vasileios Nakos. Improved Sublinear-Time Edit Distance for Preprocessed Strings. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.32,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and Nakos, Vasileios},
  title =	{{Improved Sublinear-Time Edit Distance for Preprocessed Strings}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.32},
  URN =		{urn:nbn:de:0030-drops-163734},
  doi =		{10.4230/LIPIcs.ICALP.2022.32},
  annote =	{Keywords: Edit Distance, Property Testing, Preprocessing, Precision Sampling}
}
Document
Track A: Algorithms, Complexity and Games
Fast n-Fold Boolean Convolution via Additive Combinatorics

Authors: Karl Bringmann and Vasileios Nakos

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We consider the problem of computing the Boolean convolution (with wraparound) of n vectors of dimension m, or, equivalently, the problem of computing the sumset A₁+A₂+…+A_n for A₁,…,A_n ⊆ ℤ_m. Boolean convolution formalizes the frequent task of combining two subproblems, where the whole problem has a solution of size k if for some i the first subproblem has a solution of size i and the second subproblem has a solution of size k-i. Our problem formalizes a natural generalization, namely combining solutions of n subproblems subject to a modular constraint. This simultaneously generalises Modular Subset Sum and Boolean Convolution (Sumset Computation). Although nearly optimal algorithms are known for special cases of this problem, not even tiny improvements are known for the general case. We almost resolve the computational complexity of this problem, shaving essentially a factor of n from the running time of previous algorithms. Specifically, we present a deterministic algorithm running in almost linear time with respect to the input plus output size k. We also present a Las Vegas algorithm running in nearly linear expected time with respect to the input plus output size k. Previously, no deterministic or randomized o(nk) algorithm was known. At the heart of our approach lies a careful usage of Kneser’s theorem from Additive Combinatorics, and a new deterministic almost linear output-sensitive algorithm for non-negative sparse convolution. In total, our work builds a solid toolbox that could be of independent interest.
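As a point of reference, the following Python sketch computes the iterated sumset A₁+⋯+A_n over ℤ_m by the naive method. It only illustrates the problem statement, not the almost-linear-time algorithms of the paper.

def nfold_sumset_mod_m(sets, m):
    # sets: an iterable of subsets of {0, ..., m-1}; returns A_1 + ... + A_n (mod m).
    acc = {0}
    for A in sets:
        acc = {(x + a) % m for x in acc for a in A}
    return acc

# Example: nfold_sumset_mod_m([{0, 1}, {0, 2}, {0, 3}], 7) returns {0, 1, 2, 3, 4, 5, 6}.
# The naive cost is Theta(n * m^2) in the worst case, whereas the paper aims for
# time nearly linear in the input plus output size.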

Cite as

Karl Bringmann and Vasileios Nakos. Fast n-Fold Boolean Convolution via Additive Combinatorics. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 41:1-41:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2021.41,
  author =	{Bringmann, Karl and Nakos, Vasileios},
  title =	{{Fast n-Fold Boolean Convolution via Additive Combinatorics}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{41:1--41:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.41},
  URN =		{urn:nbn:de:0030-drops-141108},
  doi =		{10.4230/LIPIcs.ICALP.2021.41},
  annote =	{Keywords: convolution, sumset computation, modular subset sum, output sensitive}
}
Document
Track A: Algorithms, Complexity and Games
On the Approximability of Multistage Min-Sum Set Cover

Authors: Dimitris Fotakis, Panagiotis Kostopanagiotis, Vasileios Nakos, Georgios Piliouras, and Stratis Skoulakis

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We investigate the polynomial-time approximability of the multistage version of Min-Sum Set Cover (Mult-MSSC), a natural and intriguing generalization of the classical List Update problem. In Mult-MSSC, we maintain a sequence of permutations (π^0, π^1, …, π^T) on n elements, based on a sequence of requests ℛ = (R^1, …, R^T). We aim to minimize the total cost of updating π^{t-1} to π^t, quantified by the Kendall tau distance d_{KT}(π^{t-1}, π^t), plus the total cost of covering each request R^t with the current permutation π^t, quantified by the position of the first element of R^t in π^t. Using a reduction from Set Cover, we show that Mult-MSSC does not admit an O(1)-approximation unless P = NP, and that any o(log n) (resp. o(r)) approximation to Mult-MSSC implies a sublogarithmic (resp. o(r)) approximation to Set Cover (resp. to instances where each element appears at most r times). Our main technical contribution is to show that Mult-MSSC can be approximated in polynomial time within a factor of O(log² n) in general instances, by randomized rounding, and within a factor of O(r²), if all requests have cardinality at most r, by deterministic rounding.
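The objective being approximated can be stated compactly in code. The Python sketch below evaluates the Mult-MSSC cost of a candidate sequence of permutations (positions taken to be 1-indexed, an assumption made here for concreteness); it is illustrative only and does not implement the rounding algorithms of the paper.

from itertools import combinations

def kendall_tau(p, q):
    # Number of element pairs ordered differently by permutations p and q
    # (each given as a list of the n elements).
    pos_q = {e: i for i, e in enumerate(q)}
    return sum(1 for i, j in combinations(range(len(p)), 2)
               if pos_q[p[i]] > pos_q[p[j]])

def mult_mssc_cost(perms, requests):
    # perms = [pi^0, ..., pi^T], requests = [R^1, ..., R^T]. The cost is the total
    # Kendall tau movement between consecutive permutations plus, for each request,
    # the (1-indexed) position of its first covered element in the current permutation.
    move = sum(kendall_tau(perms[t - 1], perms[t]) for t in range(1, len(perms)))
    cover = sum(1 + min(perms[t].index(e) for e in requests[t - 1])
                for t in range(1, len(perms)))
    return move + cover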

Cite as

Dimitris Fotakis, Panagiotis Kostopanagiotis, Vasileios Nakos, Georgios Piliouras, and Stratis Skoulakis. On the Approximability of Multistage Min-Sum Set Cover. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 65:1-65:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{fotakis_et_al:LIPIcs.ICALP.2021.65,
  author =	{Fotakis, Dimitris and Kostopanagiotis, Panagiotis and Nakos, Vasileios and Piliouras, Georgios and Skoulakis, Stratis},
  title =	{{On the Approximability of Multistage Min-Sum Set Cover}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{65:1--65:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.65},
  URN =		{urn:nbn:de:0030-drops-141341},
  doi =		{10.4230/LIPIcs.ICALP.2021.65},
  annote =	{Keywords: Approximation Algorithms, Multistage Min-Sum Set Cover, Multistage Optimization Problems}
}
Document
Track A: Algorithms, Complexity and Games
Deterministic Sparse Fourier Transform with an 𝓁_{∞} Guarantee

Authors: Yi Li and Vasileios Nakos

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of x ∈ ℂⁿ and design a recovery algorithm such that the output of the algorithm approximates x̂, the Discrete Fourier Transform (DFT) of x. The randomized case is well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains O(k² log^{-1} k ⋅ log^{5.5} n) samples and a similar runtime with the 𝓁₂/𝓁₁ guarantee. We focus on the stronger 𝓁_∞/𝓁₁ guarantee and the closely related problem of incoherent matrices. Our contributions are as follows:
1) We find a deterministic collection of O(k² log n) samples for 𝓁_∞/𝓁₁ recovery in time O(nk log² n), and a deterministic collection of O(k² log² n) samples for 𝓁_∞/𝓁₁ sparse recovery in time O(k² log³ n).
2) We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM '12), where there was no constraint on the form of the incoherent matrix.
Our algorithms are nearly sample-optimal, since a lower bound of Ω(k² + k log n) is known, even when the sensing matrix can be designed arbitrarily. A similar lower bound of Ω(k² log n / log k) is known for incoherent matrices.
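As an illustration of the target guarantee, the following numpy sketch checks whether an estimate z of the DFT satisfies an 𝓁_∞/𝓁₁ bound of the common form ‖z − x̂‖_∞ ≤ C ⋅ ‖tail_k(x̂)‖₁ / k (the constant C and this exact normalization are assumptions, not taken from the paper). It uses the full FFT as ground truth and therefore says nothing about sample complexity or runtime.

import numpy as np

def satisfies_linf_l1(x, z, k, C=1.0):
    # Checks ||z - x_hat||_inf <= C * ||tail_k(x_hat)||_1 / k, where x_hat is the
    # exact DFT of x and tail_k drops the k largest-magnitude entries.
    x_hat = np.fft.fft(x)
    tail = np.sort(np.abs(x_hat))[:-k]            # everything outside the top k
    return bool(np.max(np.abs(z - x_hat)) <= C * tail.sum() / k)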

Cite as

Yi Li and Vasileios Nakos. Deterministic Sparse Fourier Transform with an 𝓁_{∞} Guarantee. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2020.77,
  author =	{Li, Yi and Nakos, Vasileios},
  title =	{{Deterministic Sparse Fourier Transform with an 𝓁\underline\{∞\} Guarantee}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.77},
  URN =		{urn:nbn:de:0030-drops-124844},
  doi =		{10.4230/LIPIcs.ICALP.2020.77},
  annote =	{Keywords: Fourier sparse recovery, derandomization, incoherent matrices}
}
Document
Deterministic Heavy Hitters with Sublinear Query Time

Authors: Yi Li and Vasileios Nakos

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We study the classic problem of finding l_1 heavy hitters in the streaming model. In the general turnstile model, we give the first deterministic sublinear-time sketching algorithm which takes a linear sketch of length O(epsilon^{-2} log n * log^*(epsilon^{-1})), which is only a factor of log^*(epsilon^{-1}) more than the best existing polynomial-time sketching algorithm (Nelson et al., RANDOM '12). Our approach is based on an iterative procedure, where most unrecovered heavy hitters are identified in each iteration. Although this technique has been extensively employed in the related problem of sparse recovery, this is the first time, to the best of our knowledge, that it has been used in the context of heavy hitters. Along the way we also obtain a sublinear-time algorithm for the closely related problem of l_1/l_1 compressed sensing, matching the space usage of previous (super-)linear time algorithms. In the strict turnstile model, we show that the runtime can be improved and the sketching matrix can be made strongly explicit with O(epsilon^{-2} log^3 n / log^3(1/epsilon)) rows.
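For orientation, the sketch below states the l_1 heavy hitters task in the turnstile model in the most naive way, by maintaining the vector explicitly. The whole point of the paper is to produce the same output from a short linear sketch with sublinear query time; this is only the problem statement, written as Python.

def heavy_hitters_naive(n, updates, eps):
    # updates: a stream of turnstile updates (i, delta); deltas may be negative.
    # Returns every coordinate i with |x_i| >= eps * ||x||_1.
    x = [0.0] * n
    for i, delta in updates:
        x[i] += delta
    l1 = sum(abs(v) for v in x)
    return [i for i, v in enumerate(x) if l1 > 0 and abs(v) >= eps * l1]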

Cite as

Yi Li and Vasileios Nakos. Deterministic Heavy Hitters with Sublinear Query Time. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{li_et_al:LIPIcs.APPROX-RANDOM.2018.18,
  author =	{Li, Yi and Nakos, Vasileios},
  title =	{{Deterministic Heavy Hitters with Sublinear Query Time}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.18},
  URN =		{urn:nbn:de:0030-drops-94221},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.18},
  annote =	{Keywords: heavy hitters, turnstile model, sketching algorithm, strongly explicit}
}
Document
On Low-Risk Heavy Hitters and Sparse Recovery Schemes

Authors: Yi Li, Vasileios Nakos, and David P. Woodruff

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We study the heavy hitters and related sparse recovery problems in the low failure probability regime. This regime is not well-understood, and the main previous work on this is by Gilbert et al. (ICALP '13). We recognize an error in their analysis, improve their results, and contribute new sparse recovery algorithms, as well as provide upper and lower bounds for the heavy hitters problem with low failure probability. Our results are summarized as follows:
1) (Heavy Hitters) We study three natural variants for finding heavy hitters in the strict turnstile model, where the variant depends on the quality of the desired output. For the weakest variant, we give a randomized algorithm improving the failure probability analysis of the ubiquitous Count-Min data structure. We also give a new lower bound for deterministic schemes, resolving a question about this variant posed as Question 4 in the IITK Workshop on Algorithms for Data Streams (2006). Under the strongest and well-studied l_{infty}/l_2 variant, we show that the classical Count-Sketch data structure is optimal for very low failure probabilities, which was previously unknown.
2) (Sparse Recovery Algorithms) For non-adaptive sparse recovery, we give sublinear-time algorithms with low failure probability, which improve upon Gilbert et al. (ICALP '13). In the adaptive case, we improve the failure probability from a constant by Indyk et al. (FOCS '11) to e^{-k^{0.99}}, where k is the sparsity parameter.
3) (Optimal Average-Case Sparse Recovery Bounds) We give matching upper and lower bounds in all parameters, including the failure probability, for the measurement complexity of the l_2/l_2 sparse recovery problem in the spiked-covariance model, completely settling its complexity in this model.
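Since the result on the weakest variant concerns the Count-Min data structure, here is a textbook Count-Min sketch in Python for context. The parameters and hashing shown are the standard ones and do not reflect the paper's improved failure-probability analysis.

import random

class CountMin:
    # Standard Count-Min sketch: `depth` independent hash rows of `width` counters.
    # In the strict turnstile model, query(i) never underestimates x_i and
    # overestimates it by at most eps * ||x||_1 with probability >= 1 - delta
    # when width ~ e/eps and depth ~ ln(1/delta).
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.p = (1 << 61) - 1                    # a Mersenne prime for hashing
        self.hashes = [(rng.randrange(1, self.p), rng.randrange(self.p))
                       for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]
        self.width = width

    def _bucket(self, row, i):
        a, b = self.hashes[row]
        return ((a * i + b) % self.p) % self.width

    def update(self, i, delta=1):
        for r in range(len(self.table)):
            self.table[r][self._bucket(r, i)] += delta

    def query(self, i):
        return min(self.table[r][self._bucket(r, i)] for r in range(len(self.table)))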

Cite as

Yi Li, Vasileios Nakos, and David P. Woodruff. On Low-Risk Heavy Hitters and Sparse Recovery Schemes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 19:1-19:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{li_et_al:LIPIcs.APPROX-RANDOM.2018.19,
  author =	{Li, Yi and Nakos, Vasileios and Woodruff, David P.},
  title =	{{On Low-Risk Heavy Hitters and Sparse Recovery Schemes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{19:1--19:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.19},
  URN =		{urn:nbn:de:0030-drops-94237},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.19},
  annote =	{Keywords: heavy hitters, sparse recovery, turnstile model, spike covariance model, lower bounds}
}
Document
Improved Algorithms for Adaptive Compressed Sensing

Authors: Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
In the problem of adaptive compressed sensing, one wants to estimate an approximately k-sparse vector x in R^n from m linear measurements A_1 x, A_2 x, ..., A_m x, where A_i can be chosen based on the outcomes A_1 x, ..., A_{i-1} x of previous measurements. The goal is to output a vector x̂ for which |x - x̂|_p <= C ⋅ min_{k-sparse x'} |x - x'|_q, with probability at least 2/3, where C > 0 is an approximation factor. Indyk, Price and Woodruff (FOCS '11) gave an algorithm for p = q = 2 and C = 1 + epsilon with O((k/epsilon) loglog(n/k)) measurements and O(log^*(k) loglog n) rounds of adaptivity. We first improve their bounds, obtaining a scheme with O(k loglog(n/k) + (k/epsilon) loglog(1/epsilon)) measurements and O(log^*(k) loglog n) rounds, as well as a scheme with O((k/epsilon) loglog(n log(n/k))) measurements and an optimal O(loglog n) rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for (p, p) for every 0 < p < 2. We show that the improvement from O(k log(n/k)) measurements to O(k loglog(n/k)) measurements in the adaptive setting can persist with a better epsilon-dependence for other values of p and q. For example, when (p, q) = (1, 1), we obtain O(k/sqrt{epsilon} ⋅ loglog n ⋅ log³(1/epsilon)) measurements. We obtain nearly matching lower bounds, showing our algorithms are close to optimal. Along the way, we also obtain the first nearly-optimal bounds for (p, p) schemes for every 0 < p < 2, even in the non-adaptive setting.
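The recovery guarantee in the abstract is easy to state as a check. The numpy sketch below verifies the (p, q) condition, using the fact that the best k-sparse approximation of x in any ℓ_q norm keeps its k largest-magnitude entries; it only checks a candidate output and is not a measurement scheme.

import numpy as np

def best_k_sparse_error(x, k, q):
    # min over k-sparse x' of ||x - x'||_q: zero out all but the k largest entries.
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return float(np.linalg.norm(tail, ord=q)) if tail.size else 0.0

def satisfies_guarantee(x, x_hat, k, p, q, C):
    # The (p, q) recovery guarantee from the abstract, as a yes/no check.
    return bool(np.linalg.norm(x - x_hat, ord=p) <= C * best_k_sparse_error(x, k, q))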

Cite as

Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang. Improved Algorithms for Adaptive Compressed Sensing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{nakos_et_al:LIPIcs.ICALP.2018.90,
  author =	{Nakos, Vasileios and Shi, Xiaofei and Woodruff, David P. and Zhang, Hongyang},
  title =	{{Improved Algorithms for Adaptive Compressed Sensing}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.90},
  URN =		{urn:nbn:de:0030-drops-90945},
  doi =		{10.4230/LIPIcs.ICALP.2018.90},
  annote =	{Keywords: Compressed Sensing, Adaptivity, High-Dimensional Vectors}
}
Document
On Fast Decoding of High-Dimensional Signals from One-Bit Measurements

Authors: Vasileios Nakos

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In the problem of one-bit compressed sensing, the goal is to find a delta-close estimation of a k-sparse vector x in R^n given the signs of the entries of y = Phi x, where Phi is called the measurement matrix. For the one-bit compressed sensing problem, previous work [Plan, 2013] [Gopi, 2013] achieved Theta(delta^{-2} k log(n/k)) and O~((1/delta) k log(n/k)) measurements, respectively, but the decoding time was Omega(nk log(n/k)). In this paper, using tools and techniques developed in the context of two-stage group testing and streaming algorithms, we contribute towards the direction of sublinear decoding time. We give a variety of schemes for the different versions of one-bit compressed sensing, such as the for-each and for-all versions, and for support recovery; all these have at most a log k overhead in the number of measurements and poly(k, log n) decoding time, which is an exponential improvement over previous work, in terms of the dependence on n.
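To fix the measurement model, the Python sketch below generates one-bit measurements y = sign(Phi x) for a Gaussian Phi and applies a generic correlation-based baseline estimator (an assumption made here, roughly in the spirit of Plan and Vershynin); it is not one of the fast-decoding schemes constructed in the paper.

import numpy as np

def one_bit_measure(x, m, seed=0):
    # Measurement model: y = sign(Phi x) for an m x n Gaussian measurement matrix Phi.
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, len(x)))
    return phi, np.sign(phi @ x)

def baseline_estimate(phi, y, k):
    # Generic correlation baseline: correlate the signs with the columns of Phi,
    # keep the k largest coordinates, and normalize (one-bit measurements only
    # determine x up to scale).
    z = phi.T @ y / len(y)
    support = np.argsort(np.abs(z))[-k:]
    x_hat = np.zeros_like(z)
    x_hat[support] = z[support]
    return x_hat / np.linalg.norm(x_hat)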

Cite as

Vasileios Nakos. On Fast Decoding of High-Dimensional Signals from One-Bit Measurements. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{nakos:LIPIcs.ICALP.2017.61,
  author =	{Nakos, Vasileios},
  title =	{{On Fast Decoding of High-Dimensional Signals from One-Bit Measurements}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.61},
  URN =		{urn:nbn:de:0030-drops-74887},
  doi =		{10.4230/LIPIcs.ICALP.2017.61},
  annote =	{Keywords: one-bit compressed sensing, sparse recovery, heavy hitters, dyadic trick, combinatorial group testing}
}