10 Search Results for "Loff, Bruno"


Document
The Approximate Degree of Bipartite Perfect Matching

Authors: Gal Beniamini

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
The approximate degree of a Boolean function is the least degree of a real multilinear polynomial approximating it in the 𝓁_∞-norm over the Boolean hypercube. We show that the approximate degree of the Bipartite Perfect Matching function, which is the indicator over all bipartite graphs having a perfect matching of order n, is Θ̃(n^(3/2)). The upper bound is obtained by fully characterizing the unique multilinear polynomial representing the Boolean dual of the perfect matching function, over the reals. Crucially, we show that this polynomial has very small 𝓁₁-norm - only exponential in Θ(n log n). The lower bound follows by bounding the spectral sensitivity of the perfect matching function, which is the spectral radius of its cut-graph on the hypercube [Aaronson et al., 2021; Huang, 2019]. We show that the spectral sensitivity of perfect matching is exactly Θ(n^(3/2)).
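
For readers unfamiliar with the notion, the definition used in the abstract is commonly written as follows (with the usual convention of approximation error 1/3; the abstract leaves the constant implicit):

\[
\widetilde{\deg}(f) \;=\; \min\Bigl\{\, \deg(p) \;:\; p\ \text{multilinear over}\ \mathbb{R},\ \ \max_{x \in \{0,1\}^{N}} \bigl|p(x) - f(x)\bigr| \le \tfrac{1}{3} \,\Bigr\}.
\]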

Cite as

Gal Beniamini. The Approximate Degree of Bipartite Perfect Matching. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 1:1-1:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{beniamini:LIPIcs.CCC.2022.1,
  author =	{Beniamini, Gal},
  title =	{{The Approximate Degree of Bipartite Perfect Matching}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{1:1--1:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.1},
  URN =		{urn:nbn:de:0030-drops-165634},
  doi =		{10.4230/LIPIcs.CCC.2022.1},
  annote =	{Keywords: Bipartite Perfect Matching, Boolean Functions, Approximate Degree}
}
Document
Memory Compression with Quantum Random-Access Gates

Authors: Harry Buhrman, Bruno Loff, Subhasree Patro, and Florian Speelman

Published in: LIPIcs, Volume 232, 17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022)


Abstract
In the classical RAM, we have the following useful property. If we have an algorithm that uses M memory cells throughout its execution, and in addition is sparse, in the sense that, at any point in time, only m out of M cells will be non-zero, then we may "compress" it into another algorithm which uses only m log M memory and runs in almost the same time. We may do so by simulating the memory using either a hash table, or a self-balancing tree. We show an analogous result for quantum algorithms equipped with quantum random-access gates. If we have a quantum algorithm that runs in time T and uses M qubits, such that the state of the memory, at any time step, is supported on computational-basis vectors of Hamming weight at most m, then it can be simulated by another algorithm which uses only O(m log M) memory, and runs in time Õ(T). We show how this theorem can be used, in a black-box way, to simplify the presentation in several papers. Broadly speaking, when there exists a need for a space-efficient history-independent quantum data-structure, it is often possible to construct a space-inefficient, yet sparse, quantum data structure, and then appeal to our main theorem. This results in simpler and shorter arguments.
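
As a point of reference, the classical observation in the first half of the abstract can be sketched in a few lines. The class and names below are illustrative only and are not taken from the paper (which concerns the quantum analogue):

# Illustrative sketch of the *classical* observation above: a sparse array of
# M cells, of which at most m are non-zero at any time, can be simulated with
# a hash table using O(m log M) bits of memory.

class SparseMemory:
    """Simulates an M-cell array, storing only the non-zero cells."""

    def __init__(self, M: int):
        self.M = M
        self.cells = {}          # address -> non-zero value

    def read(self, addr: int) -> int:
        assert 0 <= addr < self.M
        return self.cells.get(addr, 0)

    def write(self, addr: int, value: int) -> None:
        assert 0 <= addr < self.M
        if value == 0:
            self.cells.pop(addr, None)   # keep only non-zero cells
        else:
            self.cells[addr] = value

# Example: an algorithm addressing M = 2**40 cells pays memory proportional
# to the number of non-zero cells it actually uses, not to M.
mem = SparseMemory(M=2**40)
mem.write(123_456_789_012, 7)
assert mem.read(123_456_789_012) == 7
assert mem.read(42) == 0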

Cite as

Harry Buhrman, Bruno Loff, Subhasree Patro, and Florian Speelman. Memory Compression with Quantum Random-Access Gates. In 17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 232, pp. 10:1-10:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{buhrman_et_al:LIPIcs.TQC.2022.10,
  author =	{Buhrman, Harry and Loff, Bruno and Patro, Subhasree and Speelman, Florian},
  title =	{{Memory Compression with Quantum Random-Access Gates}},
  booktitle =	{17th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2022)},
  pages =	{10:1--10:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-237-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{232},
  editor =	{Le Gall, Fran\c{c}ois and Morimae, Tomoyuki},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2022.10},
  URN =		{urn:nbn:de:0030-drops-165177},
  doi =		{10.4230/LIPIcs.TQC.2022.10},
  annote =	{Keywords: complexity theory, data structures, algorithms, quantum walk}
}
Document
Limits of Quantum Speed-Ups for Computational Geometry and Other Problems: Fine-Grained Complexity via Quantum Walks

Authors: Harry Buhrman, Bruno Loff, Subhasree Patro, and Florian Speelman

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Many computational problems are subject to a quantum speed-up: one might find that a problem having an O(n³)-time or O(n²)-time classical algorithm can be solved by a known O(n^{1.5})-time or O(n)-time quantum algorithm. The question naturally arises: how much quantum speed-up is possible? The area of fine-grained complexity allows us to prove optimal lower bounds on the complexity of various computational problems, based on the conjectured hardness of certain natural, well-studied problems. This theory has recently been extended to the quantum setting, in two independent papers by Buhrman, Patro and Speelman [Buhrman et al., 2021], and by Aaronson, Chia, Lin, Wang, and Zhang [Aaronson et al., 2020].

In this paper, we further extend the theory of fine-grained complexity to the quantum setting. A fundamental conjecture in the classical setting states that the 3SUM problem cannot be solved by (classical) algorithms in time O(n^{2-ε}), for any ε > 0. We formulate an analogous conjecture, the Quantum-3SUM-Conjecture, which states that there exist no sublinear O(n^{1-ε})-time quantum algorithms for the 3SUM problem.

Based on the Quantum-3SUM-Conjecture, we show new lower bounds on the time complexity of quantum algorithms for several computational problems. Most of our lower bounds are optimal, in that they match known upper bounds, and hence they imply tight limits on the quantum speedup that is possible for these problems.

These results are proven by adapting to the quantum setting known classical fine-grained reductions from the 3SUM problem. This adaptation is not trivial, however, since the original classical reductions require pre-processing the input in various ways, e.g. by sorting it according to some order, and this pre-processing (provably) cannot be done in sublinear quantum time. We overcome this bottleneck by combining a quantum walk with a classical dynamic data structure having a certain "history-independence" property. This type of construction has been used in the past to prove upper bounds, and here we use it for the first time as part of a reduction.

This general proof strategy allows us to prove tight lower bounds on several computational-geometry problems, on Convolution-3SUM and on the 0-Edge-Weight-Triangle problem, conditional on the Quantum-3SUM-Conjecture. We believe this proof strategy will be useful in proving tight (conditional) lower bounds, and limits on quantum speed-ups, for many other problems.
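
For background, the classical 3SUM algorithm referred to by the conjecture runs in O(n²) time after sorting. The sketch below is standard textbook code, not from the paper; it makes explicit the sorting pre-processing step that, as the abstract notes, cannot be carried out in sublinear quantum time:

def has_3sum(nums: list[int]) -> bool:
    """Return True iff three entries (at distinct indices) sum to zero."""
    a = sorted(nums)                    # O(n log n) pre-processing
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:                  # two-pointer scan: O(n) per i
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

assert has_3sum([-5, 1, 4, 10, -2]) is True    # -5 + 1 + 4 = 0
assert has_3sum([1, 2, 3]) is False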

Cite as

Harry Buhrman, Bruno Loff, Subhasree Patro, and Florian Speelman. Limits of Quantum Speed-Ups for Computational Geometry and Other Problems: Fine-Grained Complexity via Quantum Walks. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 31:1-31:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{buhrman_et_al:LIPIcs.ITCS.2022.31,
  author =	{Buhrman, Harry and Loff, Bruno and Patro, Subhasree and Speelman, Florian},
  title =	{{Limits of Quantum Speed-Ups for Computational Geometry and Other Problems: Fine-Grained Complexity via Quantum Walks}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{31:1--31:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.31},
  URN =		{urn:nbn:de:0030-drops-156273},
  doi =		{10.4230/LIPIcs.ITCS.2022.31},
  annote =	{Keywords: complexity theory, fine-grained complexity, 3SUM, computational geometry problems, data structures, quantum walk}
}
Document
Hardness of Constant-Round Communication Complexity

Authors: Shuichi Hirahara, Rahul Ilango, and Bruno Loff

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
How difficult is it to compute the communication complexity of a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix? In 2009, Kushilevitz and Weinreb showed that this problem is cryptographically hard, but it is still open whether it is NP-hard. In this work, we show that it is NP-hard to approximate the size (number of leaves) of the smallest constant-round protocol for a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix. Along the way to proving this, we show a new deterministic variant of the round elimination lemma, which may be of independent interest.

Cite as

Shuichi Hirahara, Rahul Ilango, and Bruno Loff. Hardness of Constant-Round Communication Complexity. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 31:1-31:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2021.31,
  author =	{Hirahara, Shuichi and Ilango, Rahul and Loff, Bruno},
  title =	{{Hardness of Constant-Round Communication Complexity}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{31:1--31:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.31},
  URN =		{urn:nbn:de:0030-drops-143055},
  doi =		{10.4230/LIPIcs.CCC.2021.31},
  annote =	{Keywords: NP-completeness, Communication Complexity, Round Elimination Lemma, Meta-Complexity}
}
Document
Lower Bounds for Semi-adaptive Data Structures via Corruption

Authors: Pavel Dvořák and Bruno Loff

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
In a dynamic data structure problem we wish to maintain an encoding of some data in memory, in such a way that we may efficiently carry out a sequence of queries and updates to the data. A long-standing open problem in this area is to prove an unconditional polynomial lower bound on the trade-off between the update time and the query time of an adaptive dynamic data structure computing some explicit function. Ko and Weinstein provided such a lower bound for a restricted class of semi-adaptive data structures, which compute the Disjointness function. There, the data are subsets x₁,… ,x_k and y of {1,… ,n}, the updates can modify y (by inserting and removing elements), and the queries are an index i ∈ {1,… ,k} (query i should answer whether x_i and y are disjoint, i.e., it should compute the Disjointness function applied to (x_i, y)). The semi-adaptiveness places a restriction on how the data structure can be accessed in order to answer a query. We generalize the lower bound of Ko and Weinstein to work not just for Disjointness, but for any function having high complexity under the smooth corruption bound.
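
To make the update/query interface concrete, here is a naive (and deliberately unoptimized) dynamic data structure for the Disjointness problem described above; it is illustrative only and is not the construction studied in the paper:

# Maintains, for each set x_i, a counter |x_i ∩ y|. Updates to y cost O(k)
# time in the worst case, while each query costs O(1) time; the paper is
# about lower bounds on such update/query trade-offs.

class DynamicDisjointness:
    def __init__(self, xs: list[set[int]]):
        self.xs = xs
        self.y: set[int] = set()
        self.overlap = [0] * len(xs)     # |x_i ∩ y| for each i

    def insert(self, e: int) -> None:
        if e not in self.y:
            self.y.add(e)
            for i, x in enumerate(self.xs):
                if e in x:
                    self.overlap[i] += 1

    def remove(self, e: int) -> None:
        if e in self.y:
            self.y.remove(e)
            for i, x in enumerate(self.xs):
                if e in x:
                    self.overlap[i] -= 1

    def query(self, i: int) -> bool:
        """Is x_i disjoint from the current y?"""
        return self.overlap[i] == 0

ds = DynamicDisjointness([{1, 2}, {3}])
ds.insert(3)
assert ds.query(0) is True and ds.query(1) is False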

Cite as

Pavel Dvořák and Bruno Loff. Lower Bounds for Semi-adaptive Data Structures via Corruption. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{dvorak_et_al:LIPIcs.FSTTCS.2020.20,
  author =	{Dvo\v{r}\'{a}k, Pavel and Loff, Bruno},
  title =	{{Lower Bounds for Semi-adaptive Data Structures via Corruption}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{20:1--20:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.20},
  URN =		{urn:nbn:de:0030-drops-132617},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.20},
  annote =	{Keywords: semi-adaptive dynamic data structure, polynomial lower bound, corruption bound, information theory}
}
Document
NP-Hardness of Circuit Minimization for Multi-Output Functions

Authors: Rahul Ilango, Bruno Loff, and Igor C. Oliveira

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless 𝖯 = 𝖭𝖯, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
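
One common way to phrase the decision problem whose NP-hardness is established (the paper's exact formalization, e.g. the gate basis and size measure, may differ in details):

\textsc{Multi-MCSP}: given the full truth table of a function $f\colon\{0,1\}^n \to \{0,1\}^m$ (a $2^n \times m$ Boolean matrix) and a size bound $s \in \mathbb{N}$, decide whether some Boolean circuit with at most $s$ gates computes $f$.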

Cite as

Rahul Ilango, Bruno Loff, and Igor C. Oliveira. NP-Hardness of Circuit Minimization for Multi-Output Functions. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 22:1-22:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ilango_et_al:LIPIcs.CCC.2020.22,
  author =	{Ilango, Rahul and Loff, Bruno and Oliveira, Igor C.},
  title =	{{NP-Hardness of Circuit Minimization for Multi-Output Functions}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{22:1--22:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.22},
  URN =		{urn:nbn:de:0030-drops-125744},
  doi =		{10.4230/LIPIcs.CCC.2020.22},
  annote =	{Keywords: MCSP, circuit minimization, communication complexity, Boolean circuit}
}
Document
Equality Alone Does not Simulate Randomness

Authors: Arkadev Chattopadhyay, Shachar Lovett, and Marc Vinyals

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The canonical problem that gives an exponential separation between deterministic and randomized communication complexity in the classical two-party communication model is "Equality". In this work we show that even allowing access to an "Equality" oracle, deterministic protocols remain exponentially weaker than randomized ones. More precisely, we exhibit a total function on n bits with randomized one-sided communication complexity O(log n), but such that every deterministic protocol with access to an "Equality" oracle needs Omega(n) cost to compute it. Additionally, we exhibit a natural and strict infinite hierarchy within BPP, starting with the class P^{EQ} at its bottom.
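
For context, the cheapness of Equality for randomized protocols, which the abstract contrasts with the weakness of deterministic protocols even given an Equality oracle, is witnessed by the textbook fingerprinting protocol sketched below (standard material, not a construction from the paper):

import random

P = 2**31 - 1    # a fixed prime; the field has more than n^2 elements for n up to ~46000

def equality_protocol(x: str, y: str) -> bool:
    """One-sided-error Equality test using O(log n) bits of communication."""
    n = len(x)
    assert len(y) == n and n * n < P
    # Alice: view x as coefficients of a polynomial over GF(P), pick a random
    # evaluation point r, and send (r, P_x(r) mod P) -- O(log P) bits.
    r = random.randrange(P)
    fx = sum(int(b) * pow(r, i, P) for i, b in enumerate(x)) % P
    # Bob: evaluate his own polynomial at r and compare with Alice's value.
    fy = sum(int(b) * pow(r, i, P) for i, b in enumerate(y)) % P
    return fx == fy      # always correct if x == y; errs w.p. <= n/P otherwise

x = "0110" * 64
assert equality_protocol(x, x)
# On unequal inputs the answer is wrong with probability at most n/P.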

Cite as

Arkadev Chattopadhyay, Shachar Lovett, and Marc Vinyals. Equality Alone Does not Simulate Randomness. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 14:1-14:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2019.14,
  author =	{Chattopadhyay, Arkadev and Lovett, Shachar and Vinyals, Marc},
  title =	{{Equality Alone Does not Simulate Randomness}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{14:1--14:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.14},
  URN =		{urn:nbn:de:0030-drops-108368},
  doi =		{10.4230/LIPIcs.CCC.2019.14},
  annote =	{Keywords: Communication lower bound, derandomization}
}
Document
Lifting Theorems for Equality

Authors: Bruno Loff and Sagnik Mukhopadhyay

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
We show a deterministic simulation (or lifting) theorem for composed problems f o Eq_n where the inner function (the gadget) is Equality on n bits. When f is a total function on p bits, it is easy to show via a rank argument that the communication complexity of f o Eq_n is Omega(deg(f) * n). However, there is a surprising counter-example of a partial function f on p bits, such that any completion f' of f has deg(f') = Omega(p), and yet f o Eq_n has communication complexity O(n). Nonetheless, we are able to show that the communication complexity of f o Eq_n is at least D(f) * n for a complexity measure D(f) which is closely related to the AND-query complexity of f and is lower-bounded by the logarithm of the leaf complexity of f. As a corollary, we also obtain lifting theorems for the set-disjointness gadget, and a lifting theorem in the context of parity decision-trees, for the NOR gadget. As an application, we prove a tight lower bound for the deterministic communication complexity of the following communication problem: Alice and Bob are each given p-many n-bit strings, with the promise that either all of the strings are distinct, or all-but-one of the strings are distinct, and they wish to know which is the case. We show that the complexity of this problem is Theta(p * n).
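
Written out, the composed problem in the abstract is the standard composition, where Alice holds the x-blocks and Bob holds the y-blocks:

\[
(f \circ \mathrm{Eq}_n)(x_1,\dots,x_p,\,y_1,\dots,y_p)
  \;=\; f\bigl(\mathrm{Eq}_n(x_1,y_1),\,\dots,\,\mathrm{Eq}_n(x_p,y_p)\bigr),
\qquad x_i, y_i \in \{0,1\}^n,
\]
where $\mathrm{Eq}_n(x_i,y_i) = 1$ if and only if $x_i = y_i$.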

Cite as

Bruno Loff and Sagnik Mukhopadhyay. Lifting Theorems for Equality. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 50:1-50:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{loff_et_al:LIPIcs.STACS.2019.50,
  author =	{Loff, Bruno and Mukhopadhyay, Sagnik},
  title =	{{Lifting Theorems for Equality}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{50:1--50:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.50},
  URN =		{urn:nbn:de:0030-drops-102892},
  doi =		{10.4230/LIPIcs.STACS.2019.50},
  annote =	{Keywords: Communication complexity, Query complexity, Simulation theorem, Equality function}
}
Document
Lower Bounds for Elimination via Weak Regularity

Authors: Arkadev Chattopadhyay, Pavel Dvořák, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
We consider the problem of elimination in communication complexity, which was first raised by Ambainis et al. and later studied by Beimel et al. for its connection to the famous direct sum question. In this problem, let f: {0,1}^{2n} -> {0,1} be any Boolean function. Alice and Bob get k inputs x_1, ..., x_k and y_1, ..., y_k respectively, with x_i, y_i in {0,1}^n. They want to output a k-bit vector v, such that there exists one index i for which v_i is not equal to f(x_i, y_i). We prove a general result lower bounding the randomized communication complexity of the elimination problem for f using its discrepancy. Consequently, we obtain strong lower bounds for the functions Inner-Product and Greater-Than, that work for exponentially larger values of k than the best previous bounds. To prove our result, we use a pseudo-random notion called regularity that was first used by Raz and Wigderson. We show that functions with small discrepancy are regular. We also observe that a weaker notion, that we call weak-regularity, already implies hardness of elimination. Finally, we give a different proof, borrowing ideas from Viola, to show that Greater-Than is weakly regular.
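
In symbols, the elimination problem for f with k instances asks the players to output

\[
v \in \{0,1\}^k \quad\text{such that}\quad \exists\, i \in [k]:\ v_i \neq f(x_i, y_i),
\]
where Alice holds $x_1,\dots,x_k$ and Bob holds $y_1,\dots,y_k$. (This restates the task described above in the notation of the abstract.)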

Cite as

Arkadev Chattopadhyay, Pavel Dvořák, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay. Lower Bounds for Elimination via Weak Regularity. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 21:1-21:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.STACS.2017.21,
  author =	{Chattopadhyay, Arkadev and Dvor\'{a}k, Pavel and Kouck\'{y}, Michal and Loff, Bruno and Mukhopadhyay, Sagnik},
  title =	{{Lower Bounds for Elimination via Weak Regularity}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{21:1--21:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.21},
  URN =		{urn:nbn:de:0030-drops-70128},
  doi =		{10.4230/LIPIcs.STACS.2017.21},
  annote =	{Keywords: communication complexity, elimination, discrepancy, regularity, greater-than}
}
Document
Catalytic Space: Non-determinism and Hierarchy

Authors: Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
Catalytic computation, defined by Buhrman, Cleve, Koucký, Loff and Speelman (STOC 2014), is a space-bounded computation where in addition to our working memory we have an exponentially larger auxiliary memory which is full; the auxiliary memory may be used throughout the computation, but it must be restored to its initial content by the end of the computation. Motivated by the surprising power of this model, we set out to study the non-deterministic version of catalytic computation. We establish that non-deterministic catalytic log-space is contained in ZPP, which is the same bound known for its deterministic counterpart, and we prove that non-deterministic catalytic space is closed under complement (under a standard derandomization assumption). Furthermore, we establish hierarchy theorems for non-deterministic and deterministic catalytic computation.

Cite as

Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman. Catalytic Space: Non-determinism and Hierarchy. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{buhrman_et_al:LIPIcs.STACS.2016.24,
  author =	{Buhrman, Harry and Kouck\'{y}, Michal and Loff, Bruno and Speelman, Florian},
  title =	{{Catalytic Space: Non-determinism and Hierarchy}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.24},
  URN =		{urn:nbn:de:0030-drops-57258},
  doi =		{10.4230/LIPIcs.STACS.2016.24},
  annote =	{Keywords: catalytic computation, Immerman–Szelepcs\'{e}nyi theorem, space hierarchy}
}
  • Refine by Author
  • 8 Loff, Bruno
  • 3 Buhrman, Harry
  • 3 Speelman, Florian
  • 2 Chattopadhyay, Arkadev
  • 2 Ilango, Rahul

  • Refine by Classification
  • 3 Theory of computation → Communication complexity
  • 2 Theory of computation → Computational complexity and cryptography
  • 2 Theory of computation → Oracles and decision trees
  • 2 Theory of computation → Problems, reductions and completeness
  • 2 Theory of computation → Quantum complexity theory

  • Refine by Keyword
  • 2 communication complexity
  • 2 complexity theory
  • 2 data structures
  • 2 quantum walk
  • 1 3SUM

  • Refine by Type
  • 10 document

  • Refine by Publication Year
  • 3 2022
  • 2 2019
  • 2 2020
  • 1 2016
  • 1 2017
