Search Results

Documents authored by Grier, Daniel


Document
New Hardness Results for the Permanent Using Linear Optics

Authors: Daniel Grier and Luke Schaeffer

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is #P-hard. Aaronson's proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant's seminal proof of the same fact in 1979. Nevertheless, it did not show #P-hardness of the permanent for any class of matrices which was not previously known. In this paper, we present a collection of new results about matrix permanents that are derived primarily via these linear optical techniques. First, we show that the problem of computing the permanent of a real orthogonal matrix is #P-hard. Much like Aaronson's original proof, this implies that even a multiplicative approximation remains #P-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field F_{p^4} for p != 2, 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. Finally, we use more elementary arguments to prove #P-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer.
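
A brief note on one claim above: the permanent is the signless analogue of the determinant, so over a field of characteristic 2, where +1 = -1, the two polynomials are literally identical. The Python sketch below is only an illustration of that fact, not anything taken from the paper (the test matrix and helper names are our own); it evaluates both quantities from their n!-term definitions on a small 0/1 matrix and reduces mod 2. The exponential number of terms in that sum is the naive cost that the #P-hardness results suggest cannot, in general, be brought down to polynomial time.

# Illustrative sketch, not from the paper: naive permanent and determinant,
# compared mod 2 to show they coincide in characteristic 2.
from itertools import permutations

def diagonal_product(M, sigma):
    p = 1
    for i, j in enumerate(sigma):
        p *= M[i][j]
    return p

def permanent(M):
    # Permanent: sum over all permutations, with no signs attached.
    return sum(diagonal_product(M, s) for s in permutations(range(len(M))))

def determinant(M):
    # Determinant: the same sum, but each term carries the permutation's sign.
    def sign(s):
        inversions = sum(1 for a in range(len(s))
                         for b in range(a + 1, len(s)) if s[a] > s[b])
        return -1 if inversions % 2 else 1
    return sum(sign(s) * diagonal_product(M, s) for s in permutations(range(len(M))))

M = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(permanent(M), determinant(M))          # 3 and -1 over the integers
print(permanent(M) % 2, determinant(M) % 2)  # both 1; they agree for every matrix mod 2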

Cite as

Daniel Grier and Luke Schaeffer. New Hardness Results for the Permanent Using Linear Optics. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 19:1-19:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grier_et_al:LIPIcs.CCC.2018.19,
  author =	{Grier, Daniel and Schaeffer, Luke},
  title =	{{New Hardness Results for the Permanent Using Linear Optics}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{19:1--19:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.19},
  URN =		{urn:nbn:de:0030-drops-88702},
  doi =		{10.4230/LIPIcs.CCC.2018.19},
  annote =	{Keywords: Permanent, Linear optics, #P-hardness, Orthogonal matrices}
}
Document
The Classification of Reversible Bit Operations

Authors: Scott Aaronson, Daniel Grier, and Luke Schaeffer

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
We present a complete classification of all possible sets of classical reversible gates acting on bits, in terms of which reversible transformations they generate, assuming swaps and ancilla bits are available for free. Our classification can be seen as the reversible-computing analogue of Post's lattice, a central result in mathematical logic from the 1940s. It is a step toward the ambitious goal of classifying all possible quantum gate sets acting on qubits. Our theorem implies a linear-time algorithm (which we have implemented) that takes as input the truth tables of reversible gates G and H, and that decides whether G generates H. Previously, this problem was not even known to be decidable (though with effort, one can derive from abstract considerations an algorithm that takes triply-exponential time). The theorem also implies that any n-bit reversible circuit can be "compressed" to an equivalent circuit, over the same gates, that uses at most 2^{n}poly(n) gates and O(1) ancilla bits; these are the first known upper bounds on these quantities, and they are close to optimal. Finally, the theorem implies that every non-degenerate reversible gate can implement either every reversible transformation, or every affine transformation, when restricted to an "encoded subspace." Briefly, the theorem says that every set of reversible gates generates either all reversible transformations on n-bit strings (as the Toffoli gate does); no transformations; all transformations that preserve Hamming weight (as the Fredkin gate does); all transformations that preserve Hamming weight mod k for some k; all affine transformations (as the Controlled-NOT gate does); all affine transformations that preserve Hamming weight mod 2 or mod 4, inner products mod 2, or a combination thereof; or a previous class augmented by a NOT or NOTNOT gate. Prior to this work, it was not even known that every class was finitely generated. Ruling out the possibility of additional classes not in the list requires involved arguments about polynomials, lattices, and Diophantine equations.
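
The classes named at the end of the abstract are cut out by concrete, checkable invariants of a gate's truth table: preserving Hamming weight (as Fredkin does), being affine over GF(2) (as CNOT does), or neither (as Toffoli does). The Python sketch below only illustrates those two checks on three standard gates; it is not the paper's linear-time generation algorithm, and the encoding of truth tables as permutations of integers and all function names are our own choices.

# Illustrative sketch, not the paper's algorithm: represent an n-bit reversible
# gate by its truth table (a permutation of {0, ..., 2^n - 1}) and test two of
# the invariants that define classes in the classification.

def bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bs):
    return sum(b << i for i, b in enumerate(bs))

def preserves_hamming_weight(table, n):
    # True for Fredkin-like gates: every input maps to an output of equal weight.
    return all(bin(x).count("1") == bin(table[x]).count("1") for x in range(2 ** n))

def is_affine_over_gf2(table, n):
    # f is affine iff f(x) = Ax + b over GF(2); recover b = f(0) and the columns
    # of A from the unit vectors, then verify the reconstruction on every input.
    b = bits(table[0], n)
    cols = [[u ^ v for u, v in zip(bits(table[1 << i], n), b)] for i in range(n)]
    for x in range(2 ** n):
        pred = b[:]
        for i, xi in enumerate(bits(x, n)):
            if xi:
                pred = [u ^ v for u, v in zip(pred, cols[i])]
        if from_bits(pred) != table[x]:
            return False
    return True

def cnot(x):      # target ^= control
    c, t = bits(x, 2)
    return from_bits([c, t ^ c])

def toffoli(x):   # target ^= control1 & control2
    a, b, t = bits(x, 3)
    return from_bits([a, b, t ^ (a & b)])

def fredkin(x):   # swap the last two bits when the control bit is 1
    c, a, b = bits(x, 3)
    return from_bits([c, b, a] if c else [c, a, b])

for name, gate, n in [("CNOT", cnot, 2), ("Toffoli", toffoli, 3), ("Fredkin", fredkin, 3)]:
    table = [gate(x) for x in range(2 ** n)]
    print(name,
          "weight-preserving:", preserves_hamming_weight(table, n),
          "affine:", is_affine_over_gf2(table, n))

Running it reports CNOT as affine but not weight-preserving, Fredkin as weight-preserving but not affine, and Toffoli as neither, matching the example gates named in the abstract.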

Cite as

Scott Aaronson, Daniel Grier, and Luke Schaeffer. The Classification of Reversible Bit Operations. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 23:1-23:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{aaronson_et_al:LIPIcs.ITCS.2017.23,
  author =	{Aaronson, Scott and Grier, Daniel and Schaeffer, Luke},
  title =	{{The Classification of Reversible Bit Operations}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{23:1--23:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.23},
  URN =		{urn:nbn:de:0030-drops-81737},
  doi =		{10.4230/LIPIcs.ITCS.2017.23},
  annote =	{Keywords: Reversible computation, Reversible gates, Circuit synthesis, Gate classification, Boolean logic, Post’s lattice}
}
Document
On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems

Authors: Itai Arad, Adam Bouland, Daniel Grier, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
What is the minimum amount of information and time needed to solve 2SAT? When the instance is known, it can be solved in polynomial time, but is this also possible without knowing the instance? Bei, Chen and Zhang (STOC'13) considered a model where the input is accessed by proposing possible assignments to a special oracle. This oracle, on encountering some constraint unsatisfied by the proposal, returns only the constraint index. It turns out that, in this model, even 1SAT cannot be solved in polynomial time unless P=NP. Hence, we consider a model in which the input is accessed by proposing probability distributions over assignments to the variables. The oracle then returns the index of the constraint that is most likely to be violated by this distribution. We show that the information obtained this way is sufficient to solve 1SAT in polynomial time, even when the clauses can be repeated. For 2SAT, as long as there are no repeated clauses, in polynomial time we can even learn an equivalent formula for the hidden instance and hence also solve it. Furthermore, we extend these results to the quantum regime. We show that in this setting 1QSAT can be solved in polynomial time up to constant precision, and 2QSAT can be learnt in polynomial time up to inverse polynomial precision.
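
The access model described above is concrete enough to spell out in a few lines. The Python sketch below is a toy simulation of such a trial oracle for a hidden 1SAT instance, where each clause is a single literal and a query is a product distribution given by the probability of each variable being True. The hidden instance, the clause encoding, and the tie-breaking rule are illustrative choices of ours; the sketch shows only what a single query looks like, not the paper's learning algorithm.

# Toy simulation, not from the paper: a trial oracle for a hidden 1SAT instance.
# Clause j is a single literal (variable index, polarity); polarity True means
# the clause requires that variable to be True. Repeated clauses are allowed.
HIDDEN_CLAUSES = [(0, True), (2, False), (1, True), (2, False)]

def trial_oracle(p):
    # p[i] is the probability that variable i is assigned True, independently.
    # The oracle returns the index of the clause most likely to be violated
    # under this product distribution (ties broken here by lowest index).
    def violation_probability(clause):
        var, polarity = clause
        return 1.0 - p[var] if polarity else p[var]
    probs = [violation_probability(c) for c in HIDDEN_CLAUSES]
    return max(range(len(probs)), key=lambda j: probs[j])

# Propose the all-True product distribution: only the negative literals on x_2
# can be violated, so the oracle points at one of those clauses.
print(trial_oracle([1.0, 1.0, 1.0]))   # 1
# Set x_0 to False with certainty while satisfying the clauses on x_2: the
# clause "x_0" is now the unique most-likely-violated constraint.
print(trial_oracle([0.0, 1.0, 0.0]))   # 0

The paper's results concern how far such queries can be pushed: far enough to solve 1SAT, and to learn repeat-free 2SAT instances outright.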

Cite as

Itai Arad, Adam Bouland, Daniel Grier, Miklos Santha, Aarthi Sundaram, and Shengyu Zhang. On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems. In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{arad_et_al:LIPIcs.MFCS.2016.12,
  author =	{Arad, Itai and Bouland, Adam and Grier, Daniel and Santha, Miklos and Sundaram, Aarthi and Zhang, Shengyu},
  title =	{{On the Complexity of Probabilistic Trials for Hidden Satisfiability Problems}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.12},
  URN =		{urn:nbn:de:0030-drops-64284},
  doi =		{10.4230/LIPIcs.MFCS.2016.12},
  annote =	{Keywords: computational complexity, satisfiability problems, trial and error, quantum computing, learning theory}
}