LIPIcs.ICALP.2018.90.pdf
- 14 pages
In the problem of adaptive compressed sensing, one wants to estimate an approximately k-sparse vector x in R^n from m linear measurements A_1 x, A_2 x, ..., A_m x, where A_i can be chosen based on the outcomes A_1 x, ..., A_{i-1} x of previous measurements. The goal is to output a vector x̂ for which ||x - x̂||_p <= C * min_{k-sparse x'} ||x - x'||_q with probability at least 2/3, where C > 0 is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for p = q = 2 and C = 1 + epsilon with O((k/epsilon) loglog(n/k)) measurements and O(log^*(k) loglog(n)) rounds of adaptivity. We first improve their bounds, obtaining a scheme with O(k loglog(n/k) + (k/epsilon) loglog(1/epsilon)) measurements and O(log^*(k) loglog(n)) rounds, as well as a scheme with O((k/epsilon) loglog(n log(n/k))) measurements and an optimal O(loglog(n)) rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for the (p, p) problem for every 0 < p < 2. We show that the improvement from O(k log(n/k)) measurements to O(k loglog(n/k)) measurements in the adaptive setting can persist with a better epsilon-dependence for other values of p and q. For example, when (p, q) = (1, 1), we obtain O((k/sqrt(epsilon)) * loglog(n) * log^3(1/epsilon)) measurements. We obtain nearly matching lower bounds, showing that our algorithms are close to optimal. Along the way, we also obtain the first nearly optimal bounds for (p, p) schemes for every 0 < p < 2, even in the non-adaptive setting.
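To make the problem setup and the (p, q) guarantee concrete, here is a minimal Python sketch (not from the paper; all function names and the toy scheme are illustrative assumptions). It checks the guarantee ||x - x̂||_p <= C * min_{k-sparse x'} ||x - x'||_q, where the minimum is the l_q norm of the tail of x after removing its k largest entries, and it runs a toy bisection-style adaptive scheme for an exactly 1-sparse signal, in which each measurement is chosen from earlier outcomes. The toy scheme uses O(log n) measurements and rounds, far more than the paper's O(k loglog(n/k))-type measurement bounds and O(loglog n) rounds, and does not handle approximate sparsity; it only illustrates the definitions.

```python
import numpy as np

def best_k_term_error(x, k, q):
    """min over k-sparse x' of ||x - x'||_q: the l_q norm of x with
    its k largest-magnitude entries zeroed out (the tail of x)."""
    tail = np.sort(np.abs(x))[:len(x) - k]
    return np.sum(tail ** q) ** (1.0 / q)

def lp_error(x, x_hat, p):
    return np.sum(np.abs(x - x_hat) ** p) ** (1.0 / p)

def satisfies_guarantee(x, x_hat, k, C, p=2.0, q=2.0):
    """Check the (p, q) recovery guarantee from the abstract:
    ||x - x_hat||_p <= C * min_{k-sparse x'} ||x - x'||_q."""
    return lp_error(x, x_hat, p) <= C * best_k_term_error(x, k, q)

def adaptive_locate_1sparse(x):
    """Toy adaptive scheme for an exactly 1-sparse x: each measurement
    <a_i, x> uses an indicator vector chosen from earlier outcomes,
    halving the candidate support each round."""
    lo, hi = 0, len(x)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        a = np.zeros(len(x))
        a[lo:mid] = 1.0
        if np.dot(a, x) != 0.0:   # the nonzero coordinate lies in the left half
            hi = mid
        else:                     # otherwise it must lie in the right half
            lo = mid
    x_hat = np.zeros(len(x))
    x_hat[lo] = x[lo]             # one final measurement <e_lo, x> reads the value
    return x_hat

# Usage: an exactly 1-sparse signal is recovered exactly, so the
# guarantee holds for any C >= 1 (the best 1-term error is 0 here).
x = np.zeros(1024)
x[137] = 3.5
x_hat = adaptive_locate_1sparse(x)
print(satisfies_guarantee(x, x_hat, k=1, C=1.0))   # True
```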