Search Results

Documents authored by Grigorescu, Elena


Document
APPROX
Approximation Algorithms for Directed Weighted Spanners

Authors: Elena Grigorescu, Nithish Kumar, and Young-San Lin

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
In the pairwise weighted spanner problem, the input consists of a weighted directed graph on n vertices, where each edge is assigned both a cost and a length. Furthermore, we are given k terminal vertex pairs and a distance constraint for each pair. The goal is to find a minimum-cost subgraph in which the distance constraints are satisfied. We study the weighted spanner problem, in which the edges have positive integral lengths of magnitudes that are polynomial in n, while the costs are arbitrary non-negative rational numbers. Our results include the following in the classical offline setting:
- An Õ(n^{4/5 + ε})-approximation algorithm for the weighted pairwise spanner problem. When the edges have unit costs and lengths, the best previous algorithm gives an Õ(n^{3/5 + ε})-approximation, due to Chlamtáč, Dinitz, Kortsarz, and Laekhanukit (Transactions on Algorithms, 2020).
- An Õ(n^{1/2+ε})-approximation algorithm for the weighted spanner problem when the terminal pairs consist of all vertex pairs and the distances must be preserved exactly. When the edges have unit costs and arbitrary positive lengths, the best previous algorithm gives an Õ(n^{1/2})-approximation for the all-pair spanner problem, due to Berman, Bhattacharyya, Makarychev, Raskhodnikova, and Yaroslavtsev (Information and Computation, 2013).
We also prove the first results for weighted spanners in the online setting:
- An Õ(k^{1/2 + ε})-competitive algorithm for the online weighted pairwise spanner problem. The state-of-the-art results are an Õ(n^{4/5})-competitive algorithm when edges have unit costs and arbitrary positive lengths, and a min{Õ(k^{1/2 + ε}), Õ(n^{2/3 + ε})}-competitive algorithm when edges have unit costs and lengths, due to Grigorescu, Lin, and Quanrud (APPROX, 2021).
- An Õ(k^ε)-competitive algorithm for the online weighted single-source (or single-sink) spanner problem. Without distance constraints, this problem is equivalent to the online directed Steiner tree problem. The best previous algorithm for online directed Steiner trees is an Õ(k^ε)-competitive algorithm, due to Chakrabarty, Ene, Krishnaswamy, and Panigrahi (SICOMP, 2018).
Our online results also imply efficient approximation algorithms for the corresponding offline problems. To the best of our knowledge, these are the first polynomial-time approximation (online) algorithms with sublinear approximation (competitive) ratios for the weighted spanner problems.
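
To make the problem statement concrete, here is a minimal Python sketch (not taken from the paper) that checks whether a candidate subgraph satisfies the pairwise distance constraints and reports its total cost; the graph encoding and all function names are illustrative assumptions, and the hard algorithmic question is of course which edges to select.

    import heapq

    def dijkstra(adj, source):
        """Shortest-path lengths from source in a directed graph given as
        adj = {u: [(v, length), ...]} with non-negative lengths."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    def check_pairwise_spanner(edges, chosen, constraints):
        """edges: {(u, v): (cost, length)}; chosen: set of edges forming the
        candidate subgraph; constraints: [(s, t, distance_bound), ...].
        Returns (feasible, total_cost) of the candidate subgraph."""
        adj = {}
        for (u, v) in chosen:
            adj.setdefault(u, []).append((v, edges[(u, v)][1]))
        feasible = all(dijkstra(adj, s).get(t, float("inf")) <= bound
                       for s, t, bound in constraints)
        return feasible, sum(edges[e][0] for e in chosen)

    # Toy instance: two terminal pairs with distance budgets.
    edges = {("a", "b"): (1.0, 2), ("b", "c"): (2.5, 3), ("a", "c"): (10.0, 4)}
    print(check_pairwise_spanner(edges, {("a", "b"), ("b", "c")},
                                 [("a", "c", 5), ("a", "b", 2)]))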

Cite as

Elena Grigorescu, Nithish Kumar, and Young-San Lin. Approximation Algorithms for Directed Weighted Spanners. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 8:1-8:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{grigorescu_et_al:LIPIcs.APPROX/RANDOM.2023.8,
  author =	{Grigorescu, Elena and Kumar, Nithish and Lin, Young-San},
  title =	{{Approximation Algorithms for Directed Weighted Spanners}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{8:1--8:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.8},
  URN =		{urn:nbn:de:0030-drops-188335},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.8},
  annote =	{Keywords: directed weighted spanners, linear programming, junction tree}
}
Document
RANDOM
How to Make Your Approximation Algorithm Private: A Black-Box Differentially-Private Transformation for Tunable Approximation Algorithms of Functions with Low Sensitivity

Authors: Jeremiah Blocki, Elena Grigorescu, Tamalika Mukherjee, and Samson Zhou

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We develop a framework for efficiently transforming certain approximation algorithms into differentially-private variants, in a black-box manner. Specifically, our results focus on algorithms A that output an approximation to a function f of the form (1-α)f(x)-κ ≤ A(x) ≤ (1+α)f(x)+κ, where the additive error κ ∈ ℝ_{≥ 0} and the multiplicative error α ∈ [0,1) can be "tuned" to small-enough values while incurring only a polynomial blowup in the running time/space. We show that such algorithms can be made differentially private without sacrificing accuracy, as long as the function f has small "global sensitivity". We achieve these results by applying the "smooth sensitivity" framework developed by Nissim, Raskhodnikova, and Smith (STOC 2007).
Our framework naturally applies to transform non-private FPRAS and FPTAS algorithms into ε-differentially private approximation algorithms, where the former case requires an additional postprocessing step. We apply our framework in the context of sublinear-time and sublinear-space algorithms, while preserving the nature of the algorithm in meaningful ranges of the parameters.
Our results include the first (to the best of our knowledge) ε-edge differentially-private sublinear-time algorithms for estimating the number of triangles, the number of connected components, and the weight of a minimum spanning tree of a graph, whose accuracy holds with high probability. In the area of streaming algorithms, our results include ε-DP algorithms for estimating L_p-norms, distinct elements, and weighted minimum spanning tree for both insertion-only and turnstile streams. Our transformation also provides a private version of the smooth histogram framework, which is commonly used for converting streaming algorithms into sliding window variants, and achieves a multiplicative approximation to many problems, such as estimating L_p-norms, distinct elements, and the length of the longest increasing subsequence.
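
For intuition (and as a simplification of the paper's setting, not its algorithm), the snippet below shows the vanilla ε-DP Laplace mechanism for a function that is computed exactly and has small global sensitivity; the paper's contribution is extending such guarantees, via the smooth-sensitivity framework, to tunable approximations A(x) whose own sensitivity may be large. Names and parameters here are illustrative.

    import random

    def laplace(scale):
        # The difference of two i.i.d. exponentials is Laplace(0, scale).
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def laplace_mechanism(exact_value, global_sensitivity, eps):
        """epsilon-DP release of an exactly computed statistic f(x): add
        Laplace noise with scale global_sensitivity / eps."""
        return exact_value + laplace(global_sensitivity / eps)

    # Toy example: a counting query has global sensitivity 1.
    print(laplace_mechanism(exact_value=1234, global_sensitivity=1.0, eps=0.5))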

Cite as

Jeremiah Blocki, Elena Grigorescu, Tamalika Mukherjee, and Samson Zhou. How to Make Your Approximation Algorithm Private: A Black-Box Differentially-Private Transformation for Tunable Approximation Algorithms of Functions with Low Sensitivity. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 59:1-59:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{blocki_et_al:LIPIcs.APPROX/RANDOM.2023.59,
  author =	{Blocki, Jeremiah and Grigorescu, Elena and Mukherjee, Tamalika and Zhou, Samson},
  title =	{{How to Make Your Approximation Algorithm Private: A Black-Box Differentially-Private Transformation for Tunable Approximation Algorithms of Functions with Low Sensitivity}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{59:1--59:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.59},
  URN =		{urn:nbn:de:0030-drops-188849},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.59},
  annote =	{Keywords: Differential privacy, approximation algorithms}
}
Document
On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors

Authors: Alexander R. Block, Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu Zheng, and Minshen Zhu

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Locally Decodable Codes (LDCs) are error-correcting codes C:Σⁿ → Σ^m, encoding messages in Σⁿ to codewords in Σ^m, with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length m that is super-polynomial in n, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (SICOMP 2006) showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly-improved constructions of Probabilistically Checkable Proofs.
In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Standard LDCs for Insdel errors were first studied by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), and are further motivated by recent advances in DNA random access bio-technologies.
Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries (even adaptively), over the binary alphabet. This answers a question explicitly raised by Gur and Lachish (SICOMP 2021) and is the first exponential lower bound for RLDCs. Combined with the results of Ben-Sasson et al., our result exhibits a "phase-transition"-type behavior on the codeword length for some constant query complexity. We achieve these lower bounds via a transformation of RLDCs to standard Hamming LDCs, using a careful analysis of restrictions of message bits that fix codeword bits.
We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with almost linear codeword length and constant query complexity, matching the parameters of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
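
As a side illustration (not from the paper) of why Insdel errors are harsher than Hamming errors, the sketch below compares Hamming distance with insertion/deletion edit distance on a toy string: two Insdel operations can shift every symbol and therefore look like a near-maximal number of Hamming errors. The helper names are illustrative.

    def hamming_distance(a, b):
        """Number of positions where equal-length strings a and b differ."""
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    def insdel_distance(a, b):
        """Minimum number of insertions and deletions turning a into b
        (no substitutions), via a standard LCS-style dynamic program."""
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(m + 1):
                if i == 0 or j == 0:
                    dp[i][j] = i + j
                elif a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1]
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    codeword = "0101010101"
    received = codeword[1:] + "0"   # one deletion at the front, one insertion at the end
    print(hamming_distance(codeword, received), insdel_distance(codeword, received))  # 10 2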

Cite as

Alexander R. Block, Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu Zheng, and Minshen Zhu. On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 14:1-14:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{block_et_al:LIPIcs.CCC.2023.14,
  author =	{Block, Alexander R. and Blocki, Jeremiah and Cheng, Kuan and Grigorescu, Elena and Li, Xin and Zheng, Yu and Zhu, Minshen},
  title =	{{On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{14:1--14:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.14},
  URN =		{urn:nbn:de:0030-drops-182847},
  doi =		{10.4230/LIPIcs.CCC.2023.14},
  annote =	{Keywords: Relaxed Locally Decodable Codes, Hamming Errors, Insdel Errors, Lower Bounds}
}
Document
Track A: Algorithms, Complexity and Games
Privately Estimating Graph Parameters in Sublinear Time

Authors: Jeremiah Blocki, Elena Grigorescu, and Tamalika Mukherjee

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We initiate a systematic study of algorithms that are both differentially-private and run in sublinear time for several problems in which the goal is to estimate natural graph parameters. Our main result is a differentially-private (1+ρ)-approximation algorithm for the problem of computing the average degree of a graph, for every ρ > 0. The running time of the algorithm is roughly the same (for sparse graphs) as its non-private version proposed by Goldreich and Ron (Sublinear Algorithms, 2005). We also obtain the first differentially-private sublinear-time approximation algorithms for the maximum matching size and the minimum vertex cover size of a graph. An overarching technique we employ is the notion of coupled global sensitivity of randomized algorithms. Related variants of this notion of sensitivity have been used in the literature in ad-hoc ways. Here we formalize the notion and develop it as a unifying framework for privacy analysis of randomized approximation algorithms.
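
As a small illustrative aside (a baseline, not the paper's sublinear-time algorithm, with edge-level privacy chosen here purely for simplicity): the exact average degree 2m/n of an n-vertex, m-edge graph changes by at most 2/n when a single edge is added or removed, so the basic Laplace mechanism already gives an ε-DP, linear-time estimate. The paper's point is obtaining privacy together with sublinear running time, using the coupled global sensitivity of randomized approximation algorithms.

    import random

    def laplace(scale):
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def dp_average_degree(n, m, eps):
        """epsilon-edge-DP average degree: neighboring graphs differ in one
        edge, so 2m/n has global sensitivity 2/n."""
        return 2.0 * m / n + laplace((2.0 / n) / eps)

    print(dp_average_degree(n=1000, m=2500, eps=0.5))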

Cite as

Jeremiah Blocki, Elena Grigorescu, and Tamalika Mukherjee. Privately Estimating Graph Parameters in Sublinear Time. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{blocki_et_al:LIPIcs.ICALP.2022.26,
  author =	{Blocki, Jeremiah and Grigorescu, Elena and Mukherjee, Tamalika},
  title =	{{Privately Estimating Graph Parameters in Sublinear Time}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.26},
  URN =		{urn:nbn:de:0030-drops-163674},
  doi =		{10.4230/LIPIcs.ICALP.2022.26},
  annote =	{Keywords: differential privacy, sublinear time, graph algorithms}
}
Document
APPROX
Online Directed Spanners and Steiner Forests

Authors: Elena Grigorescu, Young-San Lin, and Kent Quanrud

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We present online algorithms for directed spanners and directed Steiner forests. These are well-studied network connectivity problems that fall under the unifying framework of online covering and packing linear programming formulations. This framework was developed in the seminal work of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) and is based on primal-dual techniques. Specifically, our results include the following:
- For the pairwise spanner problem, in which the pairs of vertices to be spanned arrive online, we present an efficient randomized algorithm with competitive ratio Õ(n^{4/5}) for graphs with general edge lengths, where n is the number of vertices of the given graph. For graphs with uniform edge lengths, we give an efficient randomized algorithm with competitive ratio Õ(n^{2/3+ε}), and an efficient deterministic algorithm with competitive ratio Õ(k^{1/2+ε}), where k is the number of terminal pairs. To the best of our knowledge, these are the first online algorithms for directed spanners. In the offline version, the current best approximation ratio for uniform edge lengths is Õ(n^{3/5 + ε}), due to Chlamtáč, Dinitz, Kortsarz, and Laekhanukit (SODA 2017, TALG 2020).
- For the directed Steiner forest problem with uniform costs, in which the pairs of vertices to be connected arrive online, we present an efficient randomized algorithm with competitive ratio Õ(n^{2/3 + ε}). The state-of-the-art online algorithm for general costs is due to Chakrabarty, Ene, Krishnaswamy, and Panigrahi (SICOMP 2018) and is Õ(k^{1/2 + ε})-competitive. In the offline version, the current best approximation ratio with uniform costs is Õ(n^{26/45 + ε}), due to Abboud and Bodwin (SODA 2018).
To obtain efficient and competitive online algorithms, we observe that a small modification of the online covering and packing framework by Buchbinder and Naor implies a polynomial-time implementation of the primal-dual approach with separation oracles, which a priori might perform exponentially many calls to the oracle. We convert the online spanner problem into an online covering problem and complete the rounding-step analysis in a problem-specific fashion.
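
The online covering ingredient can be illustrated with the textbook Buchbinder-Naor style multiplicative update for online fractional covering, shown below as a hedged sketch; the paper's actual LPs for spanners are more involved and are rounded in a problem-specific way. When a covering constraint "the variables in S must sum to at least 1" arrives, the variables in S are increased multiplicatively, at a rate inversely proportional to their costs, until the constraint is satisfied.

    def online_fractional_cover(costs, constraint_stream):
        """Maintain a fractional solution x while covering constraints
        (sets of variable names) arrive online; this is the generic primal
        update of the online covering framework, without the dual part."""
        x = {e: 0.0 for e in costs}
        for S in constraint_stream:
            while sum(x[e] for e in S) < 1.0:
                for e in S:
                    if x[e] < 1.0:
                        x[e] = x[e] * (1.0 + 1.0 / costs[e]) + 1.0 / (len(S) * costs[e])
        return x

    costs = {"e1": 1.0, "e2": 2.0, "e3": 4.0}
    solution = online_fractional_cover(costs, [{"e1", "e2"}, {"e2", "e3"}])
    print({e: round(v, 3) for e, v in solution.items()})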

Cite as

Elena Grigorescu, Young-San Lin, and Kent Quanrud. Online Directed Spanners and Steiner Forests. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 5:1-5:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{grigorescu_et_al:LIPIcs.APPROX/RANDOM.2021.5,
  author =	{Grigorescu, Elena and Lin, Young-San and Quanrud, Kent},
  title =	{{Online Directed Spanners and Steiner Forests}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{5:1--5:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.5},
  URN =		{urn:nbn:de:0030-drops-146987},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.5},
  annote =	{Keywords: online directed pairwise spanners, online directed Steiner forests, online covering/packing linear programming, primal-dual approach}
}
Document
Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree

Authors: Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu

Published in: LIPIcs, Volume 180, 15th International Symposium on Parameterized and Exact Computation (IPEC 2020)


Abstract
A heapable sequence is a sequence of numbers that can be arranged in a min-heap data structure. Finding a longest heapable subsequence of a given sequence was proposed by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011) as a generalization of the well-studied longest increasing subsequence problem and its complexity still remains open. An equivalent formulation of the longest heapable subsequence problem is that of finding a maximum-sized binary tree in a given permutation directed acyclic graph (permutation DAG). In this work, we study parameterized algorithms for both longest heapable subsequence and maximum-sized binary tree. We introduce alphabet size as a new parameter in the study of computational problems in permutation DAGs and show that this parameter with respect to a fixed topological ordering admits a complete characterization and a polynomial time algorithm. We believe that this parameter is likely to be useful in the context of optimization problems defined over permutation DAGs.
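
To make the definition tangible, here is a brute-force (exponential-time) heapability checker in Python, written directly from the usual reading of the definition: each element after the first must be attached as a new child of some earlier, smaller-or-equal element, with every node allowed at most two children. It is an illustrative sketch only; the paper studies the much harder subsequence and parameterized versions.

    def is_heapable(seq):
        """Brute force: place seq[0] at the root, then try to attach each
        later element below some already-placed element with value <= it
        that still has a free child slot (at most two children per node)."""
        if not seq:
            return True

        def place(i, open_slots):
            # open_slots: sorted tuple of parent values, one entry per free child slot.
            if i == len(seq):
                return True
            x, tried = seq[i], set()
            for j, v in enumerate(open_slots):
                if v <= x and v not in tried:
                    tried.add(v)
                    new_slots = open_slots[:j] + open_slots[j + 1:] + (x, x)
                    if place(i + 1, tuple(sorted(new_slots))):
                        return True
            return False

        return place(1, (seq[0], seq[0]))

    print(is_heapable([1, 3, 2, 4]))   # True
    print(is_heapable([3, 1, 2]))      # False: 1 cannot be placed below 3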

Cite as

Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu. Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree. In 15th International Symposium on Parameterized and Exact Computation (IPEC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 180, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.IPEC.2020.7,
  author =	{Chandrasekaran, Karthekeyan and Grigorescu, Elena and Istrate, Gabriel and Kulkarni, Shubhang and Lin, Young-San and Zhu, Minshen},
  title =	{{Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree}},
  booktitle =	{15th International Symposium on Parameterized and Exact Computation (IPEC 2020)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-172-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{180},
  editor =	{Cao, Yixin and Pilipczuk, Marcin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2020.7},
  URN =		{urn:nbn:de:0030-drops-133102},
  doi =		{10.4230/LIPIcs.IPEC.2020.7},
  annote =	{Keywords: maximum binary tree, heapability, permutation directed acyclic graphs}
}
Document
Locally Decodable/Correctable Codes for Insertions and Deletions

Authors: Alexander R. Block, Jeremiah Blocki, Elena Grigorescu, Shubhang Kulkarni, and Minshen Zhu

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms. In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions. Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. Specifically, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length. Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.

Cite as

Alexander R. Block, Jeremiah Blocki, Elena Grigorescu, Shubhang Kulkarni, and Minshen Zhu. Locally Decodable/Correctable Codes for Insertions and Deletions. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{block_et_al:LIPIcs.FSTTCS.2020.16,
  author =	{Block, Alexander R. and Blocki, Jeremiah and Grigorescu, Elena and Kulkarni, Shubhang and Zhu, Minshen},
  title =	{{Locally Decodable/Correctable Codes for Insertions and Deletions}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.16},
  URN =		{urn:nbn:de:0030-drops-132577},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.16},
  annote =	{Keywords: Locally decodable/correctable codes, insert-delete channel}
}
Document
The Maximum Binary Tree Problem

Authors: Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
We introduce and investigate the approximability of the maximum binary tree problem (MBT) in directed and undirected graphs. The goal in MBT is to find a maximum-sized binary tree in a given graph. MBT is a natural variant of the well-studied longest path problem, since both can be viewed as finding a maximum-sized tree of bounded degree in a given graph. The connection to longest path motivates the study of MBT in directed acyclic graphs (DAGs), since the longest path problem is solvable efficiently in DAGs. In contrast, we show that MBT in DAGs is in fact hard: it has no efficient exp(-O(log n / log log n))-approximation algorithm under the exponential time hypothesis, where n is the number of vertices in the input graph. In undirected graphs, we show that MBT has no efficient exp(-O(log^{0.63} n))-approximation under the exponential time hypothesis. Our inapproximability results rely on self-improving reductions and structural properties of binary trees. We also show constant-factor inapproximability assuming P ≠ NP. In addition to the inapproximability results, we present algorithmic results of two different flavors: (1) we design a randomized algorithm to verify whether a given directed graph on n vertices contains a binary tree of size k in 2^k poly(n) time; (2) motivated by the longest heapable subsequence problem, introduced by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011), which is equivalent to MBT in permutation DAGs, we design efficient algorithms for MBT in bipartite permutation graphs.

Cite as

Karthekeyan Chandrasekaran, Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, Young-San Lin, and Minshen Zhu. The Maximum Binary Tree Problem. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 30:1-30:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.ESA.2020.30,
  author =	{Chandrasekaran, Karthekeyan and Grigorescu, Elena and Istrate, Gabriel and Kulkarni, Shubhang and Lin, Young-San and Zhu, Minshen},
  title =	{{The Maximum Binary Tree Problem}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{30:1--30:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.30},
  URN =		{urn:nbn:de:0030-drops-128967},
  doi =		{10.4230/LIPIcs.ESA.2020.30},
  annote =	{Keywords: maximum binary tree, heapability, inapproximability, fixed-parameter tractability}
}
Document
Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows

Authors: Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We study the distinct elements and l_p-heavy hitters problems in the sliding window model, where only the most recent n elements in the data stream form the underlying set. We first introduce the composable histogram, a simple twist on the exponential (Datar et al., SODA 2002) and smooth histograms (Braverman and Ostrovsky, FOCS 2007) that may be of independent interest. We then show that the composable histogram, along with a careful combination of existing techniques to track either the identity or frequency of a few specific items, suffices to obtain algorithms for both distinct elements and l_p-heavy hitters that are nearly optimal in both n and ε. Applying our new composable histogram framework, we provide an algorithm that outputs a (1+ε)-approximation to the number of distinct elements in the sliding window model and uses O((1/ε^2) log n log(1/ε) log log n + (1/ε) log^2 n) bits of space. For l_p-heavy hitters, we provide an algorithm using space O((1/ε^p) log^2 n (log^2 log n + log(1/ε))) for 0 < p ≤ 2, improving upon the best-known algorithm for l_2-heavy hitters (Braverman et al., COCOON 2014), which has space complexity O((1/ε^4) log^3 n). We also show complementing nearly optimal lower bounds of Ω((1/ε) log^2 n + (1/ε^2) log n) for distinct elements and Ω((1/ε^p) log^2 n) for l_p-heavy hitters, both tight up to O(log log n) and O(log(1/ε)) factors.
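
For orientation (not the paper's algorithm), here is the exact but space-hungry baseline for sliding-window distinct elements: remember the last occurrence of every element and count how many fall inside the current window. The streaming algorithms in the paper replace this linear-space bookkeeping with a (1+ε)-approximation in roughly polylogarithmic space. The function name and toy stream are illustrative.

    def sliding_window_distinct(stream, n):
        """Exact number of distinct elements among the most recent n items,
        reported after every arrival (space linear in the window)."""
        last_seen, counts = {}, []
        for t, item in enumerate(stream):
            last_seen[item] = t
            start = t - n + 1
            counts.append(sum(1 for pos in last_seen.values() if pos >= start))
        return counts

    print(sliding_window_distinct("abacabb", n=3))   # [1, 2, 2, 3, 2, 3, 2]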

Cite as

Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou. Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.APPROX-RANDOM.2018.7,
  author =	{Braverman, Vladimir and Grigorescu, Elena and Lang, Harry and Woodruff, David P. and Zhou, Samson},
  title =	{{Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.7},
  URN =		{urn:nbn:de:0030-drops-94118},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.7},
  annote =	{Keywords: Streaming algorithms, sliding windows, heavy hitters, distinct elements}
}
Document
Flipping out with Many Flips: Hardness of Testing k-Monotonicity

Authors: Elena Grigorescu, Akash Kumar, and Karl Wimmer

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
A function f: {0,1}^n → {0,1} is said to be k-monotone if it flips between 0 and 1 at most k times on every ascending chain. Such functions represent a natural generalization of (1-)monotone functions, and have been recently studied in circuit complexity, PAC learning, and cryptography. Our work is part of a renewed focus on understanding the testability of properties characterized by freeness of arbitrary order patterns, as a generalization of monotonicity. Recently, Canonne et al. (ITCS 2017) initiated the study of k-monotone functions in the area of property testing, and Newman et al. (SODA 2017) studied testability of families characterized by freeness from order patterns on real-valued functions over the line [n] domain. We study k-monotone functions in the more relaxed parametrized property testing model, introduced by Parnas et al. (JCSS, 72(6), 2006). In this process we resolve a problem left open in previous work. Specifically, our results include the following.
1) Testing 2-monotonicity on the hypercube non-adaptively with one-sided error requires a number of queries exponential in √n. This behavior is in stark contrast with testing (1-)monotonicity, which only needs Õ(√n) queries (Khot et al., FOCS 2015). Furthermore, even the apparently easier task of distinguishing 2-monotone functions from functions that are far from being n^{0.01}-monotone also requires an exponential number of queries.
2) On the hypergrid [n]^d domain, there exists a testing algorithm that makes a constant number of queries and distinguishes functions that are k-monotone from functions that are far from being O(kd^2)-monotone. Such a dependency is likely necessary, given the lower bound above for the hypercube.
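
The following brute-force sketch (illustrative only, exponential in n) computes the maximum number of 0/1 alternations of a Boolean function along ascending chains of the hypercube, which is the quantity the definition above constrains; it suffices to scan maximal chains, since removing points from a chain can never increase its alternation count. The function names and toy functions are assumptions made for the example.

    from itertools import permutations

    def alternations(bits):
        """Number of 0/1 switches in a sequence of bits."""
        return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

    def max_chain_alternations(f, n):
        """Maximum number of alternations of f: {0,1}^n -> {0,1} along an
        ascending chain; every maximal chain corresponds to an order in
        which the n coordinates are flipped from 0 to 1."""
        best = 0
        for order in permutations(range(n)):
            point = [0] * n
            values = [f(tuple(point))]
            for i in order:
                point[i] = 1
                values.append(f(tuple(point)))
            best = max(best, alternations(values))
        return best

    majority = lambda x: int(sum(x) >= 2)   # monotone: at most one alternation
    parity = lambda x: sum(x) % 2           # alternates at every step of a chain
    print(max_chain_alternations(majority, 3), max_chain_alternations(parity, 3))   # 1 3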

Cite as

Elena Grigorescu, Akash Kumar, and Karl Wimmer. Flipping out with Many Flips: Hardness of Testing k-Monotonicity. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 40:1-40:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grigorescu_et_al:LIPIcs.APPROX-RANDOM.2018.40,
  author =	{Grigorescu, Elena and Kumar, Akash and Wimmer, Karl},
  title =	{{Flipping out with Many Flips: Hardness of Testing k-Monotonicity}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{40:1--40:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.40},
  URN =		{urn:nbn:de:0030-drops-94448},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.40},
  annote =	{Keywords: Property Testing, Boolean Functions, k-Monotonicity, Lower Bounds}
}
Document
Brief Announcement
Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels

Authors: Jeremiah Blocki, Venkata Gandikota, Elena Grigorescu, and Samson Zhou

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, under the assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate, and poly-logarithmic locality. Our constructions compare favorably with existing schemes built under much stronger cryptographic assumptions, and with their classical analogues in the computationally unbounded, Hamming channel. Our constructions crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.

Cite as

Jeremiah Blocki, Venkata Gandikota, Elena Grigorescu, and Samson Zhou. Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 106:1-106:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{blocki_et_al:LIPIcs.ICALP.2018.106,
  author =	{Blocki, Jeremiah and Gandikota, Venkata and Grigorescu, Elena and Zhou, Samson},
  title =	{{Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{106:1--106:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.106},
  URN =		{urn:nbn:de:0030-drops-91102},
  doi =		{10.4230/LIPIcs.ICALP.2018.106},
  annote =	{Keywords: Relaxed locally correctable codes, computationally bounded channels, local expanders}
}
Document
Streaming for Aibohphobes: Longest Palindrome with Mismatches

Authors: Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou

Published in: LIPIcs, Volume 93, 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)


Abstract
A palindrome is a string that reads the same as its reverse, such as "aibohphobia" (fear of palindromes). Given a metric and an integer d > 0, a d-near-palindrome is a string of Hamming distance at most d from its reverse. We study the natural problem of identifying the longest d-near-palindrome in data streams. The problem is relevant to the analysis of DNA databases, and to the task of repairing recursive structures in documents such as XML and JSON. We present the first streaming algorithm for the longest d-near-palindrome problem that returns a d-near-palindrome whose length is within a multiplicative (1+ε)-factor of the longest d-near-palindrome. Our algorithm also returns the set of mismatched indices in the d-near-palindrome, and uses O((d log^7 n)/(ε log(1+ε))) bits of space and O((d log^6 n)/(ε log(1+ε))) update time per arrival symbol. We show that for d = o(√n), any randomized algorithm with multiplicative approximation (1+ε) that succeeds with probability at least 1 - 1/n requires Ω(d log n) space. We further obtain a streaming algorithm that returns a d-near-palindrome whose length is within an additive error E of the longest d-near-palindrome. The algorithm uses O((dn log^6 n)/E) bits of space and O((dn log^5 n)/E) update time. As before, we show that any randomized streaming algorithm that solves the longest d-near-palindrome problem for additive error E with probability at least 1 - 1/n uses Ω(dn/E) space. Finally, we give an exact two-pass algorithm that solves the longest d-near-palindrome problem using O(d^2 √n log^6 n) bits of space.
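
The definition is easy to check offline; the difficulty is doing it in a single pass with small space. Below is a quadratic-time, linear-space Python sketch (not the streaming algorithm) that tests the d-near-palindrome property and finds the longest d-near-palindromic substring by brute force; names and the toy inputs are illustrative.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def is_d_near_palindrome(s, d):
        """A d-near-palindrome is within Hamming distance d of its reverse."""
        return hamming(s, s[::-1]) <= d

    def longest_d_near_palindrome(s, d):
        """Brute-force longest d-near-palindromic substring."""
        best = ""
        for i in range(len(s)):
            for j in range(i + 1, len(s) + 1):
                if j - i > len(best) and is_d_near_palindrome(s[i:j], d):
                    best = s[i:j]
        return best

    print(is_d_near_palindrome("aibohphobia", 0))    # True: an exact palindrome
    print(longest_d_near_palindrome("xabcday", 2))   # "abcda": 2 mismatches with its reverse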

Cite as

Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou. Streaming for Aibohphobes: Longest Palindrome with Mismatches. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 31:1-31:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grigorescu_et_al:LIPIcs.FSTTCS.2017.31,
  author =	{Grigorescu, Elena and Sadeqi Azer, Erfan and Zhou, Samson},
  title =	{{Streaming for Aibohphobes: Longest Palindrome with Mismatches}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{31:1--31:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.31},
  URN =		{urn:nbn:de:0030-drops-84053},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.31},
  annote =	{Keywords: Longest palindrome with mismatches, Streaming algorithms, Hamming distance}
}
Document
Lattice-based Locality Sensitive Hashing is Optimal

Authors: Karthekeyan Chandrasekaran, Daniel Dadush, Venkata Gandikota, and Elena Grigorescu

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC'98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage. In a seminal work, Andoni and Indyk (FOCS '06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of 1/c^2 for the l_2 norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA '07) and O'Donnell, Wu and Zhou (TOCT '14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g., to yield low space and query complexity, remains an important open problem.
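
To ground the terminology, here is a standard random-projection LSH family for the l_2 norm (in the spirit of Datar, Immorlica, Indyk, and Mirrokni), shown as a hedged sketch; it is not the lattice-based family analyzed in the paper, and the parameters are illustrative. Nearby points land in the same bucket with higher probability than far-apart points.

    import math
    import random

    def make_l2_lsh(dim, w, seed=0):
        """One hash function h(x) = floor((a . x + b) / w), with a a random
        Gaussian vector and b uniform in [0, w)."""
        rng = random.Random(seed)
        a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        b = rng.uniform(0.0, w)
        return lambda x: math.floor((sum(ai * xi for ai, xi in zip(a, x)) + b) / w)

    h = make_l2_lsh(dim=3, w=4.0)
    print(h([0.0, 0.0, 0.0]) == h([0.1, -0.1, 0.05]))   # close pair: usually collides
    print(h([0.0, 0.0, 0.0]) == h([10.0, -7.0, 3.0]))   # far pair: usually does not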

Cite as

Karthekeyan Chandrasekaran, Daniel Dadush, Venkata Gandikota, and Elena Grigorescu. Lattice-based Locality Sensitive Hashing is Optimal. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.ITCS.2018.42,
  author =	{Chandrasekaran, Karthekeyan and Dadush, Daniel and Gandikota, Venkata and Grigorescu, Elena},
  title =	{{Lattice-based Locality Sensitive Hashing is Optimal}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.42},
  URN =		{urn:nbn:de:0030-drops-83470},
  doi =		{10.4230/LIPIcs.ITCS.2018.42},
  annote =	{Keywords: Locality Sensitive Hashing, Approximate Nearest Neighbor Search, Random Lattices}
}
Document
Testing k-Monotonicity

Authors: Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl Wimmer

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
A Boolean k-monotone function defined over a finite poset domain D alternates between the values 0 and 1 at most k times on any ascending chain in D. Therefore, k-monotone functions are natural generalizations of the classical monotone functions, which are the 1-monotone functions. Motivated by the recent interest in k-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing, we initiate a systematic study of k-monotone functions in the property testing model. In this model, the goal is to distinguish functions that are k-monotone (or are close to being k-monotone) from functions that are far from being k-monotone. Our results include the following:
1. We demonstrate a separation between testing k-monotonicity and testing monotonicity on the hypercube domain {0,1}^d, for k ≥ 3.
2. We demonstrate a separation between testing and learning on {0,1}^d, for k = ω(log d): testing k-monotonicity can be performed with 2^{O(√d · log d · log(1/ε))} queries, while learning k-monotone functions requires 2^{Ω(k · √d · (1/ε))} queries (Blais et al., RANDOM 2015).
3. We present a tolerant test for functions f: [n]^d → {0,1} with complexity independent of n, which makes progress on a problem left open by Berman et al. (STOC 2014).
Our techniques exploit the testing-by-learning paradigm, use novel applications of Fourier analysis on the grid [n]^d, and draw connections to distribution testing techniques.

Cite as

Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl Wimmer. Testing k-Monotonicity. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 29:1-29:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{canonne_et_al:LIPIcs.ITCS.2017.29,
  author =	{Canonne, Cl\'{e}ment L. and Grigorescu, Elena and Guo, Siyao and Kumar, Akash and Wimmer, Karl},
  title =	{{Testing k-Monotonicity}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{29:1--29:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.29},
  URN =		{urn:nbn:de:0030-drops-81583},
  doi =		{10.4230/LIPIcs.ITCS.2017.29},
  annote =	{Keywords: Boolean Functions, Learning, Monotonicity, Property Testing}
}
Document
Streaming Periodicity with Mismatches

Authors: Funda Ergün, Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
We study the problem of finding all k-periods of a length-n string S, presented as a data stream. S is said to have k-period p if its prefix of length n-p differs from its suffix of length n-p in at most k locations. We give a one-pass streaming algorithm that computes the k-periods of a string S using poly(k, log n) bits of space, for k-periods of length at most n/2. We also present a two-pass streaming algorithm that computes k-periods of S using poly(k, log n) bits of space, regardless of period length. We complement these results with comparable lower bounds.
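
The definition translates directly into a short (but quadratic-time, linear-space) Python check, shown here for intuition only; the paper's algorithms find the same set of k-periods in one or two passes using poly(k, log n) bits of space. Function names and the toy string are illustrative.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def k_periods(s, k):
        """All p such that s has k-period p: the prefix and the suffix of
        length len(s) - p differ in at most k positions."""
        n = len(s)
        return [p for p in range(1, n + 1) if hamming(s[:n - p], s[p:]) <= k]

    print(k_periods("abcabcabd", 1))   # [3, 6, 8, 9]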

Cite as

Funda Ergün, Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou. Streaming Periodicity with Mismatches. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 42:1-42:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ergun_et_al:LIPIcs.APPROX-RANDOM.2017.42,
  author =	{Erg\"{u}n, Funda and Grigorescu, Elena and Sadeqi Azer, Erfan and Zhou, Samson},
  title =	{{Streaming Periodicity with Mismatches}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{42:1--42:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.42},
  URN =		{urn:nbn:de:0030-drops-75911},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.42},
  annote =	{Keywords: String algorithms, Streaming algorithms, Pattern matching, Randomized algorithms, Sublinear algorithms}
}
Document
Local Testing for Membership in Lattices

Authors: Karthekeyan Chandrasekaran, Mahdi Cheraghchi, Venkata Gandikota, and Elena Grigorescu

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Testing membership in lattices is of practical relevance, with applications to integer programming, error detection in lattice-based communication and cryptography. In this work, we initiate a systematic study of local testing for membership in lattices, complementing and building upon the extensive body of work on locally testable codes. In particular, we formally define the notion of local tests for lattices and present the following: 1. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing, 35(1):1-21). 2. We demonstrate upper and lower bounds on the query complexity of local testing for membership in code formula lattices. We instantiate our results for code formula lattices constructed from Reed-Muller codes to obtain nearly-matching upper and lower bounds on the query complexity of testing such lattices. 3. We contrast lattice testing from code testing by showing lower bounds on the query complexity of testing low-dimensional lattices. This illustrates large lower bounds on the query complexity of testing membership in knapsack lattices. On the other hand, we show that knapsack lattices with bounded coefficients have low-query testers if the inputs are promised to lie in the span of the lattice.

Cite as

Karthekeyan Chandrasekaran, Mahdi Cheraghchi, Venkata Gandikota, and Elena Grigorescu. Local Testing for Membership in Lattices. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 46:1-46:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.FSTTCS.2016.46,
  author =	{Chandrasekaran, Karthekeyan and Cheraghchi, Mahdi and Gandikota, Venkata and Grigorescu, Elena},
  title =	{{Local Testing for Membership in Lattices}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{46:1--46:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.46},
  URN =		{urn:nbn:de:0030-drops-68818},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.46},
  annote =	{Keywords: Lattices, Property Testing, Locally Testable Codes, Complexity Theory}
}
Document
AC^0 o MOD_2 Lower Bounds for the Boolean Inner Product

Authors: Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, and Ning Xie

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
AC^0 ∘ MOD_2 circuits are AC^0 circuits augmented with a layer of parity gates just above the input layer. We study AC^0 ∘ MOD_2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC^0 ∘ MOD_2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n^2) lower bound for the special case of depth-4 AC^0 ∘ MOD_2. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at 0, given that their first d moments match.

Cite as

Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, and Ning Xie. AC^0 o MOD_2 Lower Bounds for the Boolean Inner Product. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cheraghchi_et_al:LIPIcs.ICALP.2016.35,
  author =	{Cheraghchi, Mahdi and Grigorescu, Elena and Juba, Brendan and Wimmer, Karl and Xie, Ning},
  title =	{{AC^0 o MOD\underline2 Lower Bounds for the Boolean Inner Product}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.35},
  URN =		{urn:nbn:de:0030-drops-63150},
  doi =		{10.4230/LIPIcs.ICALP.2016.35},
  annote =	{Keywords: Boolean analysis, circuit complexity, lower bounds}
}
Document
Deciding Orthogonality in Construction-A Lattices

Authors: Karthekeyan Chandrasekaran, Venkata Gandikota, and Elena Grigorescu

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Lattices are discrete mathematical objects with widespread applications to integer programs as well as modern cryptography. A fundamental problem in both domains is the Closest Vector Problem (popularly known as CVP). It is well-known that CVP can be easily solved in lattices that have an orthogonal basis if the orthogonal basis is specified. This motivates the orthogonality decision problem: verify whether a given lattice has an orthogonal basis. Surprisingly, the orthogonality decision problem is not known to be either NP-complete or in P. In this paper, we focus on the orthogonality decision problem for a well-known family of lattices, namely Construction-A lattices. These are lattices of the form C + qZ^n, where C is an error-correcting q-ary code, and are studied in communication settings. We provide a complete characterization of lattices obtained from binary and ternary codes using Construction- A that have an orthogonal basis. This characterization leads to an efficient algorithm solving the orthogonality decision problem, which also finds an orthogonal basis if one exists for this family of lattices. We believe that these results could provide a better understanding of the complexity of the orthogonality decision problem in general.
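
Membership in a Construction-A lattice is simple to state and to check given the code: an integer vector lies in C + qZ^n exactly when its coordinatewise residues mod q form a codeword of C. The sketch below (illustrative, with the code given as an explicit set of tuples) does just that; the paper's harder question is deciding whether such a lattice has an orthogonal basis.

    def in_construction_a_lattice(x, code, q):
        """x is in C + qZ^n iff (x mod q), taken coordinatewise, is in C."""
        return tuple(xi % q for xi in x) in code

    # Toy binary code of length 3 (the even-weight code), q = 2.
    code = {(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}
    print(in_construction_a_lattice((3, 1, 2), code, 2))   # True: residues (1, 1, 0) are in C
    print(in_construction_a_lattice((1, 0, 0), code, 2))   # False: residues (1, 0, 0) are not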

Cite as

Karthekeyan Chandrasekaran, Venkata Gandikota, and Elena Grigorescu. Deciding Orthogonality in Construction-A Lattices. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 151-162, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.FSTTCS.2015.151,
  author =	{Chandrasekaran, Karthekeyan and Gandikota, Venkata and Grigorescu, Elena},
  title =	{{Deciding Orthogonality in Construction-A Lattices}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{151--162},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.151},
  URN =		{urn:nbn:de:0030-drops-56509},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.151},
  annote =	{Keywords: Orthogonal Lattices, Construction-A, Orthogonal Decomposition, Lattice isomorphism}
}
Document
Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation

Authors: Victor Chen, Elena Grigorescu, and Ronald de Wolf

Published in: LIPIcs, Volume 5, 27th International Symposium on Theoretical Aspects of Computer Science (2010)


Abstract
We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability and, for the remaining queries, the decoder with high probability either answers correctly or declares "don't know." Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of an error-correcting data structure model proposed recently by de Wolf, and the notion of "relaxed locally decodable codes" developed in the PCP literature. We measure the efficiency of a data structure in terms of its length (the number of bits in its representation), and query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. We obtain results for the following two data structure problems:
- (Membership) Store a subset S of size at most s from a universe of size n such that membership queries can be answered efficiently, i.e., decide if a given element from the universe is in S. We construct an error-correcting data structure for this problem with length nearly linear in s log n that answers membership queries with O(1) bit-probes. This nearly matches the asymptotically optimal parameters for the noiseless case: length O(s log n) and one bit-probe, due to Buhrman, Miltersen, Radhakrishnan, and Venkatesh.
- (Univariate polynomial evaluation) Store a univariate polynomial g of degree deg(g) ≤ s over the integers modulo n such that evaluation queries can be answered efficiently, i.e., we can evaluate the output of g on a given integer modulo n. We construct an error-correcting data structure for this problem with length nearly linear in s log n that answers evaluation queries with polylog(s) · log^{1+o(1)} n bit-probes. This nearly matches the parameters of the best-known noiseless construction, due to Kedlaya and Umans.
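
To spell out the two underlying data-structure problems in their simplest noiseless form (these are naive baselines, not the near-optimal constructions cited above, and certainly not error-correcting): a characteristic bit-vector answers membership with a single bit-probe but has length n rather than O(s log n), and a coefficient table answers evaluation queries via Horner's rule. All names and toy values below are illustrative.

    def build_membership(universe_size, S):
        """Characteristic bit-vector: length n, one bit-probe per query."""
        bits = [0] * universe_size
        for x in S:
            bits[x] = 1
        return bits

    def build_poly(coeffs, n):
        """Store the coefficients of g reduced mod n."""
        return [c % n for c in coeffs]

    def eval_poly(table, x, n):
        result = 0
        for c in reversed(table):   # Horner's rule
            result = (result * x + c) % n
        return result

    bits = build_membership(10, {2, 3, 7})
    table = build_poly([1, 0, 2], 5)                  # g(t) = 1 + 2*t^2 over Z_5
    print(bits[3], bits[4], eval_poly(table, 3, 5))   # 1 0 4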

Cite as

Victor Chen, Elena Grigorescu, and Ronald de Wolf. Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation. In 27th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 5, pp. 203-214, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{chen_et_al:LIPIcs.STACS.2010.2455,
  author =	{Chen, Victor and Grigorescu, Elena and de Wolf, Ronald},
  title =	{{Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation}},
  booktitle =	{27th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{203--214},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-16-3},
  ISSN =	{1868-8969},
  year =	{2010},
  volume =	{5},
  editor =	{Marion, Jean-Yves and Schwentick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2455},
  URN =		{urn:nbn:de:0030-drops-24558},
  doi =		{10.4230/LIPIcs.STACS.2010.2455},
  annote =	{Keywords: Data Structures, Error-Correcting Codes, Membership, Polynomial Evaluation}
}