69 Search Results for "Rao, Anup"


Volume

LIPIcs, Volume 40

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)

APPROX/RANDOM 2015, August 24-26, 2015, Princeton, USA

Editors: Naveen Garg, Klaus Jansen, Anup Rao, and José D. P. Rolim

Document
Track A: Algorithms, Complexity and Games
Searching for Regularity in Bounded Functions

Authors: Siddharth Iyer and Michael Whitmeyer

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Given a function f on F₂ⁿ, we study the following problem. What is the largest affine subspace 𝒰 such that when restricted to 𝒰, all the non-trivial Fourier coefficients of f are very small? For the natural class of bounded Fourier degree d functions f: F₂ⁿ → [-1,1], we show that there exists an affine subspace of dimension at least Ω(n^{1/d!} k^{-2}), wherein all of f’s nontrivial Fourier coefficients become smaller than 2^{-k}. To complement this result, we show the existence of degree d functions with coefficients larger than 2^{-d log n} when restricted to any affine subspace of dimension larger than Ω(d n^{1/(d-1)}). In addition, we give explicit examples of functions with analogous but weaker properties. Along the way, we provide multiple characterizations of the Fourier coefficients of functions restricted to subspaces of F₂ⁿ that may be useful in other contexts. Finally, we highlight applications and connections of our results to parity kill number and affine dispersers.
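To make the objects above concrete, here is a brute-force Python sketch (an illustration of the definitions only, not the paper's construction): it restricts a bounded function on F₂ⁿ to an affine subspace a + span(B) and lists all Fourier coefficients of the restriction. The function f, the shift a, and the basis B below are made-up toy values.

import itertools
import numpy as np

def fourier_coefficients(g, k):
    # hat{g}(S) = E_y[ g(y) * (-1)^{<S,y>} ] for every S in F_2^k
    pts = list(itertools.product((0, 1), repeat=k))
    return {S: np.mean([g(y) * (-1) ** (np.dot(S, y) % 2) for y in pts]) for S in pts}

def restrict(f, a, B):
    # g(y) = f(a + y.B): the restriction of f to the affine subspace a + span(rows of B)
    a, B = np.array(a), np.array(B)
    return lambda y: f(tuple((a + np.array(y) @ B) % 2))

# Toy +/-1-valued function on F_2^3, restricted to a 2-dimensional affine subspace.
f = lambda x: (-1) ** ((x[0] & x[1]) ^ x[2])
g = restrict(f, a=(1, 0, 0), B=[(1, 1, 0), (0, 0, 1)])
print(fourier_coefficients(g, k=2))
# A restriction is "regular" when all non-trivial coefficients are small;
# this toy restriction is not, since hat{g}((0,1)) = 1.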

Cite as

Siddharth Iyer and Michael Whitmeyer. Searching for Regularity in Bounded Functions. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 83:1-83:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{iyer_et_al:LIPIcs.ICALP.2023.83,
  author =	{Iyer, Siddharth and Whitmeyer, Michael},
  title =	{{Searching for Regularity in Bounded Functions}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{83:1--83:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.83},
  URN =		{urn:nbn:de:0030-drops-181351},
  doi =		{10.4230/LIPIcs.ICALP.2023.83},
  annote =	{Keywords: regularity, bounded function, Boolean function, Fourier analysis}
}
Document
Equivalence of Systematic Linear Data Structures and Matrix Rigidity

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Cyrus Rashtchian

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Recently, Dvir, Golovnev, and Weinstein have shown that sufficiently strong lower bounds for linear data structures would imply new bounds for rigid matrices. However, their result utilizes an algorithm that requires an NP oracle, and hence, the rigid matrices are not explicit. In this work, we derive an equivalence between rigidity and the systematic linear model of data structures. For the n-dimensional inner product problem with m queries, we prove that lower bounds on the query time imply rigidity lower bounds for the query set itself. In particular, an explicit lower bound of ω(n/r log m) for r redundant storage bits would yield better rigidity parameters than the best bounds due to Alon, Panigrahy, and Yekhanin. We also prove a converse result, showing that rigid matrices directly correspond to hard query sets for the systematic linear model. As an application, we prove that the set of vectors obtained from rank one binary matrices is rigid with parameters matching the known results for explicit sets. This implies that the vector-matrix-vector problem requires query time Ω(n^(3/2)/r) for redundancy r ≥ √n in the systematic linear model, improving a result of Chakraborty, Kamma, and Larsen. Finally, we prove a cell probe lower bound for the vector-matrix-vector problem in the high error regime, improving a result of Chattopadhyay, Koucký, Loff, and Mukhopadhyay.
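As a definition-level illustration of the rigidity notion used above (a Python brute-force sketch under the standard F₂ rigidity definition, not the construction in the paper; the matrix M is an arbitrary toy example): the rigidity of a 0/1 matrix for target rank r is the minimum number of entries that must be flipped so that its rank over F₂ drops to at most r.

import itertools
import numpy as np

def rank_f2(M):
    # Gaussian elimination over F_2
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((i for i in range(rank, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for i in range(rows):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]
        rank += 1
    return rank

def rigidity(M, r):
    # smallest number of entry flips bringing the F_2-rank down to at most r (brute force)
    cells = list(itertools.product(range(M.shape[0]), range(M.shape[1])))
    for s in range(len(cells) + 1):
        for flips in itertools.combinations(cells, s):
            N = M.copy()
            for (i, j) in flips:
                N[i, j] ^= 1
            if rank_f2(N) <= r:
                return s

M = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
print(rank_f2(M), rigidity(M, 1))   # prints: 2 3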

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Cyrus Rashtchian. Equivalence of Systematic Linear Data Structures and Matrix Rigidity. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{natarajanramamoorthy_et_al:LIPIcs.ITCS.2020.35,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rashtchian, Cyrus},
  title =	{{Equivalence of Systematic Linear Data Structures and Matrix Rigidity}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.35},
  URN =		{urn:nbn:de:0030-drops-117204},
  doi =		{10.4230/LIPIcs.ITCS.2020.35},
  annote =	{Keywords: matrix rigidity, systematic linear data structures, cell probe model, communication complexity}
}
Document
Trade-Offs in Distributed Interactive Proofs

Authors: Pierluigi Crescenzi, Pierre Fraigniaud, and Ami Paz

Published in: LIPIcs, Volume 146, 33rd International Symposium on Distributed Computing (DISC 2019)


Abstract
The study of interactive proofs in the context of distributed network computing is a novel topic, recently introduced by Kol, Oshman, and Saxena [PODC 2018]. In the spirit of sequential interactive proofs theory, we study the power of distributed interactive proofs. This is achieved via a series of results establishing trade-offs between various parameters impacting the power of interactive proofs, including the number of interactions, the certificate size, the communication complexity, and the form of randomness used. Our results also connect distributed interactive proofs with the established field of distributed verification. In general, our results contribute to providing structure to the landscape of distributed interactive proofs.

Cite as

Pierluigi Crescenzi, Pierre Fraigniaud, and Ami Paz. Trade-Offs in Distributed Interactive Proofs. In 33rd International Symposium on Distributed Computing (DISC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 146, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{crescenzi_et_al:LIPIcs.DISC.2019.13,
  author =	{Crescenzi, Pierluigi and Fraigniaud, Pierre and Paz, Ami},
  title =	{{Trade-Offs in Distributed Interactive Proofs}},
  booktitle =	{33rd International Symposium on Distributed Computing (DISC 2019)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-126-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{146},
  editor =	{Suomela, Jukka},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2019.13},
  URN =		{urn:nbn:de:0030-drops-113202},
  doi =		{10.4230/LIPIcs.DISC.2019.13},
  annote =	{Keywords: Distributed interactive proofs, Distributed verification}
}
Document
Imperfect Gaps in Gap-ETH and PCPs

Authors: Mitali Bafna and Nikhil Vyas

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r,q] subseteq PCP_{1,s'}[r+O(1),q+O(r)] for c-s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1-epsilon,1-delta) with delta > epsilon has 2^{o(n)} algorithms if and only if MAX 3SAT(1,1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way to show that T_2(n) <= T_1(n (log log n)^{O(1)}), where T_1(n) and T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness, respectively.

Cite as

Mitali Bafna and Nikhil Vyas. Imperfect Gaps in Gap-ETH and PCPs. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 32:1-32:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bafna_et_al:LIPIcs.CCC.2019.32,
  author =	{Bafna, Mitali and Vyas, Nikhil},
  title =	{{Imperfect Gaps in Gap-ETH and PCPs}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{32:1--32:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.32},
  URN =		{urn:nbn:de:0030-drops-108545},
  doi =		{10.4230/LIPIcs.CCC.2019.32},
  annote =	{Keywords: PCP, Gap-ETH, Hardness of Approximation}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits

Authors: Pavel Hrubeš, Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao, and Amir Yehudayoff

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
There are various notions of balancing set families that appear in combinatorics and computer science. For example, a family of proper non-empty subsets S_1,...,S_k subset [n] is balancing if for every subset X subset {1,2,...,n} of size n/2, there is an i in [k] so that |S_i cap X| = |S_i|/2. We extend and simplify the framework developed by Hegedűs for proving lower bounds on the size of balancing set families. We prove that if n=2p for a prime p, then k >= p. For arbitrary values of n, we show that k >= n/2 - o(n). We then exploit the connection between balancing families and depth-2 threshold circuits. This connection helps resolve a question raised by Kulikov and Podolskii on the fan-in of depth-2 majority circuits computing the majority function on n bits. We show that any depth-2 threshold circuit that computes the majority on n bits has at least one gate with fan-in at least n/2 - o(n). We also prove a sharp lower bound on the fan-in of depth-2 threshold circuits computing a specific weighted threshold function.
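Since the balancing property is stated explicitly above, a tiny brute-force check makes it concrete (an illustrative Python sketch, exponential in n and unrelated to the lower-bound techniques of the paper; the family below is a made-up example on n = 4).

from itertools import combinations

def is_balancing(n, family):
    # family: proper non-empty subsets of {0, ..., n-1}; n assumed even
    sets = [set(S) for S in family]
    for X in combinations(range(n), n // 2):
        X = set(X)
        if not any(2 * len(S & X) == len(S) for S in sets):
            return False   # this X of size n/2 is split exactly in half by no S_i
    return True

print(is_balancing(4, [{0, 1}, {2, 3}, {0, 2}]))   # True: every 2-subset of {0,...,3} is halved by some S_i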

Cite as

Pavel Hrubeš, Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao, and Amir Yehudayoff. Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 72:1-72:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{hrubes_et_al:LIPIcs.ICALP.2019.72,
  author =	{Hrube\v{s}, Pavel and Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup and Yehudayoff, Amir},
  title =	{{Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{72:1--72:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.72},
  URN =		{urn:nbn:de:0030-drops-106487},
  doi =		{10.4230/LIPIcs.ICALP.2019.72},
  annote =	{Keywords: Balancing sets, depth-2 threshold circuits, polynomials, majority, weighted thresholds}
}
Document
Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We prove new cell-probe lower bounds for dynamic data structures that maintain a subset of {1,2,...,n}, and compute various statistics of the set. The data structure is said to handle insertions non-adaptively if the locations of memory accessed depend only on the element being inserted, and not on the contents of the memory. For any such data structure that can compute the median of the set, we prove that: t_{med} >= Omega(n^{1/(t_{ins}+1)}/(w^2 * t_{ins}^2)), where t_{ins} is the number of memory locations accessed during insertions, t_{med} is the number of memory locations accessed to compute the median, and w is the number of bits stored in each memory location. When the data structure is able to perform deletions non-adaptively and compute the minimum non-adaptively, we prove t_{min} + t_{del} >= Omega(log n /(log w + log log n)), where t_{min} is the number of locations accessed to compute the minimum, and t_{del} is the number of locations accessed to perform deletions. For the predecessor search problem, where the data structure is required to compute the predecessor of any element in the set, we prove that if computing the predecessors can be done non-adaptively, then either t_{pred} >= Omega(log n/(log log n + log w)), or t_{ins} >= Omega(n^{1/(2(t_{pred}+1))}), where t_{pred} is the number of locations accessed to compute predecessors. These bounds are nearly matched by Binary Search Trees in some range of parameters. Our results follow from using the Sunflower Lemma of Erdös and Rado [Paul Erdös and Richard Rado, 1960] together with several kinds of encoding arguments.
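The combinatorial object invoked at the end of the abstract, a sunflower, is a family of sets whose pairwise intersections all equal one common core. A minimal brute-force Python illustration on a made-up family (this is only the definition, not the encoding arguments of the paper):

from itertools import combinations

def find_sunflower(family, k):
    # return k sets whose pairwise intersections all equal the same core, if such sets exist
    sets = [frozenset(S) for S in family]
    for group in combinations(sets, k):
        core = frozenset.intersection(*group)
        if all(A & B == core for A, B in combinations(group, 2)):
            return group, core
    return None

print(find_sunflower([{1, 2}, {1, 3}, {1, 4}, {2, 3}], k=3))
# finds the sunflower with petals {1,2}, {1,3}, {1,4} and core {1}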

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao. Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{natarajanramamoorthy_et_al:LIPIcs.CCC.2018.27,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup},
  title =	{{Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.27},
  URN =		{urn:nbn:de:0030-drops-88625},
  doi =		{10.4230/LIPIcs.CCC.2018.27},
  annote =	{Keywords: Non-adaptive data structures, Sunflower lemma}
}
Document
A Direct-Sum Theorem for Read-Once Branching Programs

Authors: Anup Rao and Makrand Sinha

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
We study a direct-sum question for read-once branching programs. If M(f) denotes the minimum average memory required to compute a function f(x_1,x_2, ..., x_n), how much memory is required to compute f on k independent inputs that arrive in parallel? We show that when the inputs are sampled independently from some domain X and M(f) = Omega(n), then computing the value of f on k streams requires average memory at least Omega(k * M(f)/n). Our results are obtained by defining new ways to measure the information complexity of read-once branching programs. We define two such measures: the transitional and cumulative information content. We prove that any read-once branching program with transitional information content I can be simulated using average memory O(n(I+1)). On the other hand, if every read-once branching program with cumulative information content I can be simulated with average memory O(I+1), then computing f on k inputs requires average memory at least Omega(k * (M(f)-1)).

Cite as

Anup Rao and Makrand Sinha. A Direct-Sum Theorem for Read-Once Branching Programs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 44:1-44:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{rao_et_al:LIPIcs.APPROX-RANDOM.2016.44,
  author =	{Rao, Anup and Sinha, Makrand},
  title =	{{A Direct-Sum Theorem for Read-Once Branching Programs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{44:1--44:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.44},
  URN =		{urn:nbn:de:0030-drops-66676},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.44},
  annote =	{Keywords: Direct-sum, Information complexity, Streaming Algorithms}
}
Document
Complete Volume
LIPIcs, Volume 40, APPROX/RANDOM'15, Complete Volume

Authors: Naveen Garg, Klaus Jansen, Anup Rao, and José D. P. Rolim

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
LIPIcs, Volume 40, APPROX/RANDOM'15, Complete Volume

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@Proceedings{garg_et_al:LIPIcs.APPROX-RANDOM.2015,
  title =	{{LIPIcs, Volume 40, APPROX/RANDOM'15, Complete Volume}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015},
  URN =		{urn:nbn:de:0030-drops-54012},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015},
  annote =	{Keywords: Data Structures, Coding and Information Theory, Theory of Computation, Computation by Abstract Devices, Modes of Computation, Complexity Measures and Problem Complexity, Numerical Algorithms and Problems, Nonnumerical Algorithms and Problems, Approximation, Numerical Linear Algorithms and Problems}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors

Authors: Naveen Garg, Klaus Jansen, Anup Rao, and José D. P. Rolim

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. i-xviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{garg_et_al:LIPIcs.APPROX-RANDOM.2015.i,
  author =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  title =	{{Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{i--xviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.i},
  URN =		{urn:nbn:de:0030-drops-53474},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors}
}
Document
On Guillotine Cutting Sequences

Authors: Fidaa Abed, Parinya Chalermsook, José Correa, Andreas Karrenbauer, Pablo Pérez-Lantero, José A. Soto, and Andreas Wiese

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Imagine a wooden plate with a set of non-overlapping geometric objects painted on it. How many of them can a carpenter cut out using a panel saw making guillotine cuts, i.e., only moving forward through the material along a straight line until it is split into two pieces? Already fifteen years ago, Pach and Tardos investigated whether one can always cut out a constant fraction if all objects are axis-parallel rectangles. However, even for the case of axis-parallel squares this question is still open. In this paper, we answer the latter affirmatively. Our result is constructive and holds even in a more general setting where the squares have weights and the goal is to save as much weight as possible. We further show that an affirmative answer to the more general question for rectangles, using only axis-parallel cuts, would yield a combinatorial O(1)-approximation algorithm for the Maximum Independent Set of Rectangles problem, and would thus solve a long-standing open problem. In practical applications, like the mentioned carpentry setting and many others, we can usually place the items that we want to cut out freely, which gives rise to the two-dimensional guillotine knapsack problem: Given a collection of axis-parallel rectangles without presumed coordinates, our goal is to place as many of them as possible in a square-shaped knapsack respecting the constraint that the placed objects can be separated by a sequence of guillotine cuts. Our main result for this problem is a quasi-PTAS, assuming the input data are quasi-polynomially bounded integers. This matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.
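The central notion above, separability by a sequence of guillotine cuts, can be made concrete with a small exhaustive checker (an illustrative Python sketch, not one of the paper's algorithms; the example coordinates are made up). A set of non-overlapping rectangles is separable if it is a single rectangle, or if some axis-parallel line that crosses no rectangle splits it into two non-empty parts which are each separable:

def guillotine_separable(rects):
    # rects: list of (x1, y1, x2, y2) with x1 < x2 and y1 < y2, pairwise non-overlapping
    if len(rects) <= 1:
        return True
    candidates = ([('x', v) for r in rects for v in (r[0], r[2])] +
                  [('y', v) for r in rects for v in (r[1], r[3])])
    for axis, cut in candidates:
        lo, hi = (0, 2) if axis == 'x' else (1, 3)
        below = [r for r in rects if r[hi] <= cut]
        above = [r for r in rects if r[lo] >= cut]
        # valid guillotine cut: crosses no rectangle and leaves something on both sides
        if below and above and len(below) + len(above) == len(rects):
            if guillotine_separable(below) and guillotine_separable(above):
                return True
    return False

# Three rectangles stacked in an L-shape are separable; a pinwheel of four rectangles is not.
print(guillotine_separable([(0, 0, 1, 1), (1, 0, 2, 1), (0, 1, 2, 2)]))                 # True
print(guillotine_separable([(0, 0, 3, 1), (3, 0, 4, 3), (1, 3, 4, 4), (0, 1, 1, 4)]))   # False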

Cite as

Fidaa Abed, Parinya Chalermsook, José Correa, Andreas Karrenbauer, Pablo Pérez-Lantero, José A. Soto, and Andreas Wiese. On Guillotine Cutting Sequences. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{abed_et_al:LIPIcs.APPROX-RANDOM.2015.1,
  author =	{Abed, Fidaa and Chalermsook, Parinya and Correa, Jos\'{e} and Karrenbauer, Andreas and P\'{e}rez-Lantero, Pablo and Soto, Jos\'{e} A. and Wiese, Andreas},
  title =	{{On Guillotine Cutting Sequences}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{1--19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.1},
  URN =		{urn:nbn:de:0030-drops-52917},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.1},
  annote =	{Keywords: Guillotine cuts, Rectangles, Squares, Independent Sets, Packing}
}
Document
Approximate Nearest Neighbor Search in Metrics of Planar Graphs

Authors: Ittai Abraham, Shiri Chechik, Robert Krauthgamer, and Udi Wieder

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
We investigate the problem of approximate Nearest-Neighbor Search (NNS) in graphical metrics: The task is to preprocess an edge-weighted graph G=(V,E) on m vertices and a small "dataset" D \subset V of size n << m, so that given a query point q \in V, one can quickly approximate dist(q,D) (the distance from q to its closest vertex in D) and find a vertex a \in D within this approximated distance. We assume the query algorithm has access to a distance oracle, that quickly evaluates the exact distance between any pair of vertices. For planar graphs G with maximum degree Delta, we show how to efficiently construct a compact data structure -- of size ~O(n(Delta+1/epsilon)) -- that answers (1+epsilon)-NNS queries in time ~O(Delta+1/epsilon). Thus, as far as NNS applications are concerned, metrics derived from bounded-degree planar graphs behave as low-dimensional metrics, even though planar metrics do not necessarily have a low doubling dimension, nor can they be embedded with low distortion into l_2. We complement our algorithmic result by lower bounds showing that the access to an exact distance oracle (rather than an approximate one) and the dependency on Delta (in query time) are both essential.

Cite as

Ittai Abraham, Shiri Chechik, Robert Krauthgamer, and Udi Wieder. Approximate Nearest Neighbor Search in Metrics of Planar Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 20-42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{abraham_et_al:LIPIcs.APPROX-RANDOM.2015.20,
  author =	{Abraham, Ittai and Chechik, Shiri and Krauthgamer, Robert and Wieder, Udi},
  title =	{{Approximate Nearest Neighbor Search in Metrics of Planar Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{20--42},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.20},
  URN =		{urn:nbn:de:0030-drops-52923},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.20},
  annote =	{Keywords: Data Structures, Nearest Neighbor Search, Planar Graphs, Planar Metrics, Planar Separator}
}
Document
How to Tame Rectangles: Solving Independent Set and Coloring of Rectangles via Shrinking

Authors: Anna Adamaszek, Parinya Chalermsook, and Andreas Wiese

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
In the Maximum Weight Independent Set of Rectangles (MWISR) problem, we are given a collection of weighted axis-parallel rectangles in the plane. Our goal is to compute a maximum weight subset of pairwise non-overlapping rectangles. Due to its various applications, as well as connections to many other problems in computer science, MWISR has received a lot of attention from the computational geometry and the approximation algorithms community. However, despite being extensively studied, MWISR remains not very well understood in terms of polynomial time approximation algorithms, as there is a large gap between the upper and lower bounds, i.e., O(log n / log log n) vs. NP-hardness. Another important, poorly understood question is whether one can color rectangles with at most O(omega(R)) colors, where omega(R) is the size of a maximum clique in the intersection graph of a set of input rectangles R. Asplund and Grünbaum obtained an upper bound of O(omega(R)^2) about 50 years ago, and this bound has remained asymptotically the best known. This question is strongly related to the integrality gap of the canonical LP for MWISR. In this paper, we settle the above three open problems in a relaxed model where we are allowed to shrink the rectangles by a tiny bit (rescaling them by a factor of 1-delta for an arbitrarily small constant delta > 0). Namely, in this model, we show (i) a PTAS for MWISR and (ii) a coloring with O(omega(R)) colors, which implies a constant upper bound on the integrality gap of the canonical LP. For some applications of MWISR the possibility to shrink the rectangles has a natural, well-motivated meaning. Our results can be seen as evidence that the shrinking model is a promising way to relax a geometric problem for the purpose of better algorithmic results.

Cite as

Anna Adamaszek, Parinya Chalermsook, and Andreas Wiese. How to Tame Rectangles: Solving Independent Set and Coloring of Rectangles via Shrinking. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 43-60, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{adamaszek_et_al:LIPIcs.APPROX-RANDOM.2015.43,
  author =	{Adamaszek, Anna and Chalermsook, Parinya and Wiese, Andreas},
  title =	{{How to Tame Rectangles: Solving Independent Set and Coloring of Rectangles via Shrinking}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{43--60},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.43},
  URN =		{urn:nbn:de:0030-drops-52936},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.43},
  annote =	{Keywords: Approximation algorithms, independent set, resource augmentation, rectangle intersection graphs, PTAS}
}
Document
Non-Uniform Robust Network Design in Planar Graphs

Authors: David Adjiashvili

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Robust optimization is concerned with constructing solutions that remain feasible even when a limited number of resources is removed from the solution. Most studies of robust combinatorial optimization to date have made the assumption that every resource is equally vulnerable, and that the set of scenarios is implicitly given by a single budget constraint. This paper studies a robustness model of a different kind. We focus on Bulk-Robustness, a model recently introduced (Adjiashvili, Stiller, Zenklusen 2015) for addressing the need to model non-uniform failure patterns in systems. We significantly extend the techniques used by Adjiashvili et al. to design approximation algorithms for bulk-robust network design problems in planar graphs. Our techniques use an augmentation framework, combined with linear programming (LP) rounding that depends on a planar embedding of the input graph. A connection to cut covering problems and the dominating set problem in circle graphs is established. Our methods use few of the specifics of bulk-robust optimization, so it is conceivable that they can be adapted to solve other robust network design problems.

Cite as

David Adjiashvili. Non-Uniform Robust Network Design in Planar Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 61-77, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{adjiashvili:LIPIcs.APPROX-RANDOM.2015.61,
  author =	{Adjiashvili, David},
  title =	{{Non-Uniform Robust Network Design in Planar Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{61--77},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.61},
  URN =		{urn:nbn:de:0030-drops-52948},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.61},
  annote =	{Keywords: Robust optimization, Network design, Planar graph, Approximation algorithm, LP rounding}
}
Document
Large Supports are Required for Well-Supported Nash Equilibria

Authors: Yogesh Anbalagan, Hao Huang, Shachar Lovett, Sergey Norin, Adrian Vetta, and Hehui Wu

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
We prove that for any constant k and any epsilon < 1, there exist bimatrix win-lose games for which every epsilon-WSNE requires supports of cardinality greater than k. To do this, we provide a graph-theoretic characterization of win-lose games that possess epsilon-WSNE with constant cardinality supports. We then apply a result in additive number theory of Haight to construct win-lose games that do not satisfy the requirements of the characterization. These constructions disprove graph theoretic conjectures of Daskalakis, Mehta and Papadimitriou and Myers.
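For reference, the ε-WSNE condition itself is a standard definition that is easy to check: every pure strategy a player uses with positive probability must be an ε-best response to the opponent's mixed strategy. A minimal Python sketch on a made-up win-lose game (this is only the definition, not the constructions in the paper):

import numpy as np

def is_eps_wsne(A, B, x, y, eps):
    # A, B: payoff matrices of the row and column player; x, y: their mixed strategies
    row_payoffs = A @ y       # expected payoff of each pure row strategy against y
    col_payoffs = B.T @ x     # expected payoff of each pure column strategy against x
    ok_rows = all(row_payoffs[i] >= row_payoffs.max() - eps for i in range(len(x)) if x[i] > 0)
    ok_cols = all(col_payoffs[j] >= col_payoffs.max() - eps for j in range(len(y)) if y[j] > 0)
    return ok_rows and ok_cols

# Matching pennies in win-lose form: the uniform profile is an exact (eps = 0) WSNE.
A = np.array([[1, 0], [0, 1]])
B = 1 - A
print(is_eps_wsne(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5]), eps=0.0))   # True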

Cite as

Yogesh Anbalagan, Hao Huang, Shachar Lovett, Sergey Norin, Adrian Vetta, and Hehui Wu. Large Supports are Required for Well-Supported Nash Equilibria. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 78-84, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{anbalagan_et_al:LIPIcs.APPROX-RANDOM.2015.78,
  author =	{Anbalagan, Yogesh and Huang, Hao and Lovett, Shachar and Norin, Sergey and Vetta, Adrian and Wu, Hehui},
  title =	{{Large Supports are Required for Well-Supported Nash Equilibria}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{78--84},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.78},
  URN =		{urn:nbn:de:0030-drops-52959},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.78},
  annote =	{Keywords: bimatrix games, well-supported Nash equilibria}
}
  • Refine by Author
  • 8 Rao, Anup
  • 5 Guruswami, Venkatesan
  • 4 Natarajan Ramamoorthy, Sivaramakrishnan
  • 3 Coja-Oghlan, Amin
  • 3 Jansen, Klaus
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Cell probe models and lower bounds
  • 2 Theory of computation → Circuit complexity
  • 1 Mathematics of computing
  • 1 Mathematics of computing → Combinatorics
  • 1 Theory of computation → Complexity theory and logic
  • Show More...

  • Refine by Keyword
  • 4 communication complexity
  • 3 Approximation algorithm
  • 3 approximation algorithms
  • 2 Approximation Algorithms
  • 2 Approximation resistance
  • Show More...

  • Refine by Type
  • 68 document
  • 1 volume

  • Refine by Publication Year
  • 62 2015
  • 3 2019
  • 1 2016
  • 1 2018
  • 1 2020
  • Show More...
