25 Search Results for "Umans, Christopher"


Document
Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions

Authors: Xin Li

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The recent line of study on randomness extractors has been a great success, resulting in exciting new techniques, new connections, and breakthroughs on long-standing open problems in several seemingly different topics. These include seeded non-malleable extractors, privacy amplification protocols with an active adversary, independent source extractors (and explicit Ramsey graphs), and non-malleable codes in the split-state model. Previously, the best constructions were given in [Xin Li, 2017]: seeded non-malleable extractors with seed length and entropy requirement O(log n+log(1/epsilon)log log (1/epsilon)) for error epsilon; two-round privacy amplification protocols with optimal entropy loss for security parameter up to Omega(k/log k), where k is the entropy of the shared weak source; two-source extractors for entropy O(log n log log n); and non-malleable codes in the 2-split-state model with rate Omega(1/log n). However, in all cases there is still a gap to optimum, and the motivation to close this gap remains strong. In this paper, we introduce a set of new techniques to further push the frontier on the above questions. Our techniques lead to improvements in all of the above questions, and in several cases to partially optimal constructions. This is in contrast to all previous work, which only obtains close-to-optimal constructions. Specifically, we obtain: 1) a seeded non-malleable extractor with seed length O(log n)+log^{1+o(1)}(1/epsilon) and entropy requirement O(log log n+log(1/epsilon)), where the entropy requirement is asymptotically optimal by a recent result of Gur and Shinkar [Tom Gur and Igor Shinkar, 2018]; 2) a two-round privacy amplification protocol with optimal entropy loss for security parameter up to Omega(k), which solves the privacy amplification problem completely; 3) a two-source extractor for entropy O((log n log log n)/(log log log n)), which also gives an explicit Ramsey graph on N vertices with no clique or independent set of size (log N)^{O((log log log N)/(log log log log N))}; and 4) the first explicit non-malleable code in the 2-split-state model with constant rate, which has been a major goal in the study of non-malleable codes for quite some time. One small caveat is that the error of this code is only (an arbitrarily small) constant, but we can also achieve negligible error with rate Omega(log log log n/log log n), which already improves the rate in [Xin Li, 2017] exponentially. We believe our new techniques can help to eventually obtain completely optimal constructions for the above questions, and may have applications in other settings.
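For reference, the notion in result 1) is standardly formalized as follows (textbook background in the style of Dodis and Wichs, not quoted from the paper): a function nmExt : {0,1}^n x {0,1}^d -> {0,1}^m is a (k, epsilon) seeded non-malleable extractor if for every source X with min-entropy at least k, a uniform seed S, and every seed-tampering function A with no fixed point,

\[ \big(\mathrm{nmExt}(X,S),\ \mathrm{nmExt}(X,\mathcal{A}(S)),\ S\big) \;\approx_{\varepsilon}\; \big(U_m,\ \mathrm{nmExt}(X,\mathcal{A}(S)),\ S\big), \]

where \(\approx_{\varepsilon}\) denotes statistical distance at most epsilon and \(U_m\) is uniform on {0,1}^m.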

Cite as

Xin Li. Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 28:1-28:49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{li:LIPIcs.CCC.2019.28,
  author =	{Li, Xin},
  title =	{{Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{28:1--28:49},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.28},
  URN =		{urn:nbn:de:0030-drops-108507},
  doi =		{10.4230/LIPIcs.CCC.2019.28},
  annote =	{Keywords: extractor, non-malleable, privacy, codes}
}
Document
Barriers for Fast Matrix Multiplication from Irreversibility

Authors: Matthias Christandl, Péter Vrana, and Jeroen Zuiddam

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
Determining the asymptotic algebraic complexity of matrix multiplication, succinctly represented by the matrix multiplication exponent omega, is a central problem in algebraic complexity theory. The best upper bounds on omega, leading to the state-of-the-art omega <= 2.37..., have been obtained via the laser method of Strassen and its generalization by Coppersmith and Winograd. Recent barrier results show limitations for these and related approaches to improve the upper bound on omega. We introduce a new and more general barrier, providing stronger limitations than in previous work. Concretely, we introduce the notion of "irreversibility" of a tensor and we prove (in some precise sense) that any approach that uses an irreversible tensor in an intermediate step (e.g., as a starting tensor in the laser method) cannot give omega = 2. In quantitative terms, we prove that the best upper bound achievable is lower bounded by twice the irreversibility of the intermediate tensor. The quantum functionals and Strassen support functionals give the best lower bounds on irreversibility known so far. We provide lower bounds on the irreversibility of key intermediate tensors, including the small and big Coppersmith-Winograd tensors, that improve limitations shown in previous work. Finally, we discuss barriers on the group-theoretic approach in terms of "monomial" irreversibility.
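In symbols, the quantitative barrier above can be paraphrased as follows (a transcription of the abstract's statement, with irr(t) denoting the irreversibility of the intermediate tensor t, normalized so that reversible tensors have irr(t) = 1):

\[ \omega_{\text{best achievable via } t} \;\geq\; 2 \cdot \mathrm{irr}(t), \]

so an approach routed through t can reach omega = 2 only if t is reversible.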

Cite as

Matthias Christandl, Péter Vrana, and Jeroen Zuiddam. Barriers for Fast Matrix Multiplication from Irreversibility. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{christandl_et_al:LIPIcs.CCC.2019.26,
  author =	{Christandl, Matthias and Vrana, P\'{e}ter and Zuiddam, Jeroen},
  title =	{{Barriers for Fast Matrix Multiplication from Irreversibility}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.26},
  URN =		{urn:nbn:de:0030-drops-108487},
  doi =		{10.4230/LIPIcs.CCC.2019.26},
  annote =	{Keywords: Matrix multiplication exponent, barriers, laser method}
}
Document
Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas

Authors: Dean Doron, Pooya Hatami, and William M. Hoza

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We give an explicit pseudorandom generator (PRG) for read-once AC^0, i.e., constant-depth read-once formulas over the basis {wedge, vee, neg} with unbounded fan-in. The seed length of our PRG is O~(log(n/epsilon)). Previously, PRGs with near-optimal seed length were known only for the depth-2 case [Gopalan et al., 2012]. For a constant depth d > 2, the best prior PRG is a recent construction by Forbes and Kelley with seed length O~(log^2 n + log n log(1/epsilon)) for the more general model of constant-width read-once branching programs with arbitrary variable order [Michael A. Forbes and Zander Kelley, 2018]. Looking beyond read-once AC^0, we also show that our PRG fools read-once AC^0[oplus] with seed length O~(t + log(n/epsilon)), where t is the number of parity gates in the formula. Our construction follows Ajtai and Wigderson’s approach of iterated pseudorandom restrictions [Ajtai and Wigderson, 1989]. We assume by recursion that we already have a PRG for depth-d AC^0 formulas. To fool depth-(d + 1) AC^0 formulas, we use the given PRG, combined with a small-bias distribution and almost k-wise independence, to sample a pseudorandom restriction. The analysis of Forbes and Kelley [Michael A. Forbes and Zander Kelley, 2018] shows that our restriction approximately preserves the expectation of the formula. The crux of our work is showing that after poly(log log n) independent applications of our pseudorandom restriction, the formula simplifies in the sense that every gate other than the output has only polylog n remaining children. Finally, as the last step, we use a recent PRG by Meka, Reingold, and Tal [Meka et al., 2019] to fool this simpler formula.
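As a toy illustration of the restriction paradigm described above, the following Python sketch hits a random DNF with a truly random restriction and reports how the surviving fan-in shrinks. Truly random bits stand in for the paper's pseudorandom restrictions built from a PRG, a small-bias distribution, and almost k-wise independence; all names and parameters here are illustrative.

import random

n, w, terms, p = 1000, 20, 200, 0.1

# A random width-w DNF: each term is an AND of (variable, required value) literals.
dnf = [[(random.randrange(n), random.choice([0, 1])) for _ in range(w)]
       for _ in range(terms)]

# A p-random restriction: each variable stays free ('*') with probability p,
# otherwise it is fixed to a uniformly random bit.
rho = {v: ('*' if random.random() < p else random.choice([0, 1])) for v in range(n)}

surviving = []
for term in dnf:
    killed, live = False, 0
    for v, want in term:
        if rho[v] == '*':
            live += 1                # literal still depends on a free variable
        elif rho[v] != want:
            killed = True            # a fixed literal falsifies the AND
            break
    if not killed:
        surviving.append(live)

print(f"{len(surviving)} of {terms} terms survive;"
      f" max live literals in a term: {max(surviving, default=0)}")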

Cite as

Dean Doron, Pooya Hatami, and William M. Hoza. Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 16:1-16:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{doron_et_al:LIPIcs.CCC.2019.16,
  author =	{Doron, Dean and Hatami, Pooya and Hoza, William M.},
  title =	{{Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{16:1--16:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.16},
  URN =		{urn:nbn:de:0030-drops-108382},
  doi =		{10.4230/LIPIcs.CCC.2019.16},
  annote =	{Keywords: Pseudorandom generators, Constant-depth formulas, Explicit constructions}
}
Document
Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

Authors: Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards proving NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C is not in SIZE[n^k] for all k, we show that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D is not in SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^{NP} that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.-P^{NP}_{/n}). Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not NP. We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.
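Displayed as a single equivalence (transcribing the abstract's ASCII notation into LaTeX), the first result reads:

\[ \forall k:\ \mathrm{P}^{\mathrm{NP}} \not\subseteq \mathrm{SIZE}[n^k] \iff \Big(\mathrm{NP} \subseteq \mathrm{P}_{/\mathrm{poly}} \;\Rightarrow\; \mathrm{PH} \subseteq \text{i.o.-}\mathrm{P}^{\mathrm{NP}}_{/n}\Big). \]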

Cite as

Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams. Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2019.30,
  author =	{Chen, Lijie and McKay, Dylan M. and Murray, Cody D. and Williams, R. Ryan},
  title =	{{Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{30:1--30:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.30},
  URN =		{urn:nbn:de:0030-drops-108525},
  doi =		{10.4230/LIPIcs.CCC.2019.30},
  annote =	{Keywords: Karp-Lipton Theorems, Circuit Lower Bounds, Derandomization, Hardness Magnification}
}
Document
Typically-Correct Derandomization for Small Time and Space

Authors: William M. Hoza

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. As an immediate corollary, there is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also give several other complexity-theoretic applications of our technique.

Cite as

William M. Hoza. Typically-Correct Derandomization for Small Time and Space. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 9:1-9:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2019.9,
  author =	{Hoza, William M.},
  title =	{{Typically-Correct Derandomization for Small Time and Space}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{9:1--9:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.9},
  URN =		{urn:nbn:de:0030-drops-108317},
  doi =		{10.4230/LIPIcs.CCC.2019.9},
  annote =	{Keywords: Derandomization, pseudorandomness, space complexity}
}
Document
From DNF Compression to Sunflower Theorems via Regularity

Authors: Shachar Lovett, Noam Solomon, and Jiapeng Zhang

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The sunflower conjecture is one of the most well-known open problems in combinatorics. It has several applications in theoretical computer science, one of which is DNF compression, due to Gopalan, Meka and Reingold (Computational Complexity, 2013). In this paper, we show that improved bounds for DNF compression imply improved bounds for the sunflower conjecture, which is the reverse direction of the DNF compression result. Our main approach is based on the regularity of set systems and a structure-vs-pseudorandomness dichotomy for the sunflower conjecture.
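For readers outside combinatorics, the standard definition involved is the following (background, not taken from the paper): sets S_1, ..., S_k form a k-sunflower with core \(Y = \bigcap_i S_i\) if

\[ S_i \cap S_j = Y \quad \text{for all } i \neq j, \]

and the Erdős-Rado sunflower conjecture asserts that, for some constant c(k) depending only on k, every family of more than c(k)^w sets of size w contains a k-sunflower.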

Cite as

Shachar Lovett, Noam Solomon, and Jiapeng Zhang. From DNF Compression to Sunflower Theorems via Regularity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{lovett_et_al:LIPIcs.CCC.2019.5,
  author =	{Lovett, Shachar and Solomon, Noam and Zhang, Jiapeng},
  title =	{{From DNF Compression to Sunflower Theorems via Regularity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.5},
  URN =		{urn:nbn:de:0030-drops-108277},
  doi =		{10.4230/LIPIcs.CCC.2019.5},
  annote =	{Keywords: DNF sparsification, sunflower conjecture, regular set systems}
}
Document
Limits on the Universal Method for Matrix Multiplication

Authors: Josh Alman

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams [Alman and Williams, 2018] recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen’s Laser Method [V. Strassen, 1987] and Cohn and Umans' Group Theoretic Method [Cohn and Umans, 2003]. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on omega, the exponent of matrix multiplication, better than 2.16805. By comparison, it was previously only known that the weaker "Galactic Method" applied to CW_q could not achieve an exponent of 2. We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves omega = 2 if and only if it is possible for the Universal Method applied to T to achieve omega = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd [Coppersmith and Winograd, 1990] achieved a bound of omega <= 2.376, by applying the Laser Method to CW_q. By our result, the fact that they did not achieve omega = 2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve omega = 2, then Coppersmith and Winograd’s application of the Laser Method would have achieved omega = 2.
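Slice rank, the tool used for the upper bounds above, has the following standard definition (background, not quoted from the paper): for a 3-tensor T over a field, SR(T) is the minimum r such that

\[ T(x,y,z) \;=\; \sum_{i=1}^{r} T_i(x,y,z), \]

where each T_i splits off a single coordinate, i.e. has the form a(x)b(y,z), a(y)b(x,z), or a(z)b(x,y). The asymptotic slice rank is then obtained from \(\mathrm{SR}(T^{\otimes m})^{1/m}\) as m grows (a limit superior in general).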

Cite as

Josh Alman. Limits on the Universal Method for Matrix Multiplication. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:LIPIcs.CCC.2019.12,
  author =	{Alman, Josh},
  title =	{{Limits on the Universal Method for Matrix Multiplication}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.12},
  URN =		{urn:nbn:de:0030-drops-108347},
  doi =		{10.4230/LIPIcs.CCC.2019.12},
  annote =	{Keywords: Matrix Multiplication, Laser Method, Slice Rank}
}
Document
Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391)

Authors: Markus Bläser, Valentine Kabanets, Jacobo Torán, and Christopher Umans

Published in: Dagstuhl Reports, Volume 8, Issue 9 (2019)


Abstract
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest new techniques.

Cite as

Markus Bläser, Valentine Kabanets, Jacobo Torán, and Christopher Umans. Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391). In Dagstuhl Reports, Volume 8, Issue 9, pp. 133-153, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{blaser_et_al:DagRep.8.9.133,
  author =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Tor\'{a}n, Jacobo and Umans, Christopher},
  title =	{{Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391)}},
  pages =	{133--153},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{8},
  number =	{9},
  editor =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Tor\'{a}n, Jacobo and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.9.133},
  URN =		{urn:nbn:de:0030-drops-103438},
  doi =		{10.4230/DagRep.8.9.133},
  annote =	{Keywords: computational complexity, algebra, (de-) randomization, circuits, coding, lower bounds}
}
Document
Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411)

Authors: Valentine Kabanets, Thomas Thierauf, Jacobo Torán, and Christopher Umans

Published in: Dagstuhl Reports, Volume 6, Issue 10 (2017)


Abstract
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful to develop efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-path algorithms. This emphasizes once again the central role of algebra in computer science. The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics and this seminar can play an important role in educating a diverse community about the latest new techniques, spurring further progress.

Cite as

Valentine Kabanets, Thomas Thierauf, Jacobo Torán, and Christopher Umans. Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411). In Dagstuhl Reports, Volume 6, Issue 10, pp. 13-32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{kabanets_et_al:DagRep.6.10.13,
  author =	{Kabanets, Valentine and Thierauf, Thomas and Tor\'{a}n, Jacobo and Umans, Christopher},
  title =	{{Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411)}},
  pages =	{13--32},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{6},
  number =	{10},
  editor =	{Kabanets, Valentine and Thierauf, Thomas and Tor\'{a}n, Jacobo and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.10.13},
  URN =		{urn:nbn:de:0030-drops-69504},
  doi =		{10.4230/DagRep.6.10.13},
  annote =	{Keywords: Computational Complexity, lower bounds, approximation, pseudo-randomness, derandomization, circuits}
}
Document
On the Power of Quantum Fourier Sampling

Authors: Bill Fefferman and Christopher Umans

Published in: LIPIcs, Volume 61, 11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016)


Abstract
A line of work initiated by Terhal and DiVincenzo [Terhal/DiVincenzo, arXiv, 2002] and Bremner, Jozsa, and Shepherd [Bremner/Jozsa/Shepherd, Proc. Royal Soc. A, 2010] shows that restricted classes of quantum computation can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer, unless the PH collapses. Aaronson and Arkhipov [Aaronson/Arkhipov, J. Theory of Comp., 2013] take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation but that, under two feasible conjectures, cannot even be approximately sampled within bounded total variation distance, unless the PH collapses. In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled exactly by a quantum computer. We then argue that these distributions cannot be approximately sampled classically, unless the PH collapses, under variants of the Aaronson-Arkhipov conjectures. In particular, we show a general class of quantumly sampleable distributions each of which is based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler implies an average-case approximation. This class of polynomials contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial, as well as many other familiar #P-hard polynomials. Since our distribution likely requires the full power of universal quantum computation, while the Aaronson-Arkhipov distribution uses only linear optical quantum computation with noninteracting bosons, why is our result interesting? We can think of at least three reasons: 1. Since the conjectures required in [Aaronson/Arkhipov, J. Theory of Comp., 2013] have not yet been proven, it seems worthwhile to weaken them as much as possible. We do this in two ways, by weakening both conjectures to apply to any "Efficiently Specifiable" polynomial, and by weakening the so-called Anti-Concentration conjecture so that it need only hold for one distribution in a broad class of distributions. 2. Our construction can be understood without any knowledge of linear optics. While this may be a disadvantage for experimentalists, in our opinion it results in a very clean and simple exposition that may be more immediately accessible to computer scientists. 3. It is extremely common for quantum computations to employ "Quantum Fourier Sampling" in the following way: first apply a classically efficient function to a uniform superposition of inputs, then apply a Quantum Fourier Transform followed by a measurement. Our distributions are obtained in exactly this way, where the classically efficient function is related to a (presumed) hard polynomial. Establishing rigorously a robust sense in which the central primitive of Quantum Fourier Sampling is classically hard seems a worthwhile goal in itself.
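The "prepare, phase, Fourier-transform, measure" recipe in point 3 can be simulated classically for tiny instances. The Python sketch below is illustrative only: it uses the Fourier transform over Z_2^n (the n-fold Hadamard) and a made-up +/-1 phase function standing in for an Efficiently Specifiable polynomial, and samples y with probability equal to the squared normalized Fourier coefficient of f at y.

import itertools
import numpy as np

n = 4                                   # toy number of qubits
N = 2 ** n

def f(bits):
    """Hypothetical 'classically efficient' +/-1 function on {0,1}^n."""
    return (-1) ** ((sum(bits) + bits[0] * bits[-1]) % 2)

# Uniform superposition with phase f(x): amplitudes f(x)/sqrt(N).
xs = list(itertools.product([0, 1], repeat=n))
state = np.array([f(x) for x in xs], dtype=float) / np.sqrt(N)

# Fourier transform over Z_2^n = n-fold tensor power of the 2x2 Hadamard.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
QFT = H
for _ in range(n - 1):
    QFT = np.kron(QFT, H)
state = QFT @ state

# Measuring yields y with probability |amplitude|^2.
probs = state ** 2
y = np.random.choice(N, p=probs / probs.sum())
print(f"sampled y = {y:0{n}b} with probability {probs[y]:.4f}")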

Cite as

Bill Fefferman and Christopher Umans. On the Power of Quantum Fourier Sampling. In 11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 61, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.TQC.2016.1,
  author =	{Fefferman, Bill and Umans, Christopher},
  title =	{{On the Power of Quantum Fourier Sampling}},
  booktitle =	{11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-019-4},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{61},
  editor =	{Broadbent, Anne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2016.1},
  URN =		{urn:nbn:de:0030-drops-66829},
  doi =		{10.4230/LIPIcs.TQC.2016.1},
  annote =	{Keywords: Quantum Complexity Theory, Sampling Complexity}
}
Document
Algebra in Computational Complexity (Dagstuhl Seminar 14391)

Authors: Manindra Agrawal, Valentine Kabanets, Thomas Thierauf, and Christopher Umans

Published in: Dagstuhl Reports, Volume 4, Issue 9 (2015)


Abstract
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called "chasm at depth 4" suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model, and these are tied to central questions regarding the power of randomness in computation. Representation theory has emerged as an important tool in three separate lines of work: the "Geometric Complexity Theory" approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory has seen several algebraic innovations in recent years, including multiplicity codes, and new lower bounds. This seminar brought together researchers who are using a diverse array of algebraic methods in a variety of settings. It plays an important role in educating a diverse community about the latest new techniques, spurring further progress.

Cite as

Manindra Agrawal, Valentine Kabanets, Thomas Thierauf, and Christopher Umans. Algebra in Computational Complexity (Dagstuhl Seminar 14391). In Dagstuhl Reports, Volume 4, Issue 9, pp. 85-105, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Article{agrawal_et_al:DagRep.4.9.85,
  author =	{Agrawal, Manindra and Kabanets, Valentine and Thierauf, Thomas and Umans, Christopher},
  title =	{{Algebra in Computational Complexity (Dagstuhl Seminar 14391)}},
  pages =	{85--105},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{9},
  editor =	{Agrawal, Manindra and Kabanets, Valentine and Thierauf, Thomas and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.9.85},
  URN =		{urn:nbn:de:0030-drops-48851},
  doi =		{10.4230/DagRep.4.9.85},
  annote =	{Keywords: Computational Complexity, lower bounds, approximation, pseudo-randomness, derandomization, circuits}
}
Document
One Time-traveling Bit is as Good as Logarithmically Many

Authors: Ryan O'Donnell and A. C. Cem Say

Published in: LIPIcs, Volume 29, 34th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2014)


Abstract
We consider computation in the presence of closed timelike curves (CTCs), as proposed by Deutsch. We focus on the case in which the CTCs carry classical bits (as opposed to qubits). Previously, Aaronson and Watrous showed that computation with polynomially many CTC bits is equivalent in power to PSPACE. On the other hand, Say and Yakaryilmaz showed that computation with just 1 classical CTC bit gives the power of "postselection", thereby upgrading classical randomized computation (BPP) to the complexity class BPP_path and standard quantum computation (BQP) to the complexity class PP. It is natural to ask whether increasing the number of CTC bits from 1 to 2 (or 3, 4, etc.) leads to increased computational power. We show that the answer is no: randomized computation with logarithmically many CTC bits (i.e., polynomially many CTC states) is equivalent to BPP_path. (Similarly, quantum computation augmented with logarithmically many classical CTC bits is equivalent to PP.) Spoilsports with no interest in time travel may view our results as concerning the robustness of the class BPP_path and the computational complexity of sampling from an implicitly defined Markov chain.
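Deutsch's causal-consistency rule behind these results is easy to state and to simulate: one run of the computation applies a stochastic map M to the CTC bit, and nature returns the bit distributed according to a fixed point p = Mp of that map. A minimal Python sketch (the 2x2 matrix is a made-up example, not from the paper):

import numpy as np

# M[i, j] = Pr[output bit = i | CTC input bit = j], column-stochastic.
M = np.array([[0.7, 0.4],
              [0.3, 0.6]])

# Power iteration converges to the stationary distribution p = M p.
p = np.array([0.5, 0.5])
for _ in range(200):
    p = M @ p

assert np.allclose(p, M @ p)
print("consistent CTC-bit distribution:", p)   # approximately [4/7, 3/7]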

Cite as

Ryan O'Donnell and A. C. Cem Say. One Time-traveling Bit is as Good as Logarithmically Many. In 34th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 29, pp. 469-480, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{odonnell_et_al:LIPIcs.FSTTCS.2014.469,
  author =	{O'Donnell, Ryan and Cem Say, A. C.},
  title =	{{One Time-traveling Bit is as Good as Logarithmically Many}},
  booktitle =	{34th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2014)},
  pages =	{469--480},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-77-4},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{29},
  editor =	{Raman, Venkatesh and Suresh, S. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2014.469},
  URN =		{urn:nbn:de:0030-drops-48646},
  doi =		{10.4230/LIPIcs.FSTTCS.2014.469},
  annote =	{Keywords: Time travel, postselection, Markov chains, randomized computation}
}
Document
Circuit Obfuscation Using Braids

Authors: Gorjan Alagic, Stacey Jeffery, and Stephen Jordan

Published in: LIPIcs, Volume 27, 9th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2014)


Abstract
An obfuscator is an algorithm that translates circuits into functionally-equivalent similarly-sized circuits that are hard to understand. Efficient obfuscators would have many applications in cryptography. Until recently, theoretical progress has mainly been limited to no-go results. Recent works have proposed the first efficient obfuscation algorithms for classical logic circuits, based on a notion of indistinguishability against polynomial-time adversaries. In this work, we propose a new notion of obfuscation, which we call partial-indistinguishability. This notion is based on computationally universal groups with efficiently computable normal forms, and appears to be incomparable with existing definitions. We describe universal gate sets for both classical and quantum computation, in which our definition of obfuscation can be met by polynomial-time algorithms. We also discuss some potential applications to testing quantum computers. We stress that the cryptographic security of these obfuscators, especially when composed with translation from other gate sets, remains an open question.
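The role of "efficiently computable normal forms" in this definition can be seen in a miniature example: in a free group, two words represent the same element exactly when their free reductions coincide, so equivalence of toy "obfuscated" programs reduces to comparing normal forms. The braid groups used in the paper are richer, but the principle is the same (a sketch under those simplifying assumptions):

def free_reduce(word):
    """Normal form in a free group: letters are nonzero ints, inverse = negation."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()                  # cancel adjacent g g^{-1}
        else:
            out.append(g)
    return out

w1 = [1, 2, -2, 3]                     # x1 x2 x2^{-1} x3
w2 = [1, 3]                            # x1 x3
assert free_reduce(w1) == free_reduce(w2)   # same normal form => same element
print(free_reduce(w1))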

Cite as

Gorjan Alagic, Stacey Jeffery, and Stephen Jordan. Circuit Obfuscation Using Braids. In 9th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 27, pp. 141-160, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{alagic_et_al:LIPIcs.TQC.2014.141,
  author =	{Alagic, Gorjan and Jeffery, Stacey and Jordan, Stephen},
  title =	{{Circuit Obfuscation Using Braids}},
  booktitle =	{9th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2014)},
  pages =	{141--160},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-73-6},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{27},
  editor =	{Flammia, Steven T. and Harrow, Aram W.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2014.141},
  URN =		{urn:nbn:de:0030-drops-48135},
  doi =		{10.4230/LIPIcs.TQC.2014.141},
  annote =	{Keywords: obfuscation, cryptography, universality, quantum}
}
Document
Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication

Authors: Hu Fu and Robert Kleinberg

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Understanding the query complexity for testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple property whose testing complexity is unknown. Three Boolean functions f1, f2 and f3, mapping {0,1}^k to {0,1}, are said to be triangle free if there is no x, y in {0,1}^k such that f1(x) = f2(y) = f3(x + y) = 1. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in 1/epsilon, where epsilon is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of (1/epsilon)^2.423 for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to (1/epsilon)^6.619. Interestingly, we prove this by way of a combinatorial construction called uniquely solvable puzzles that was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
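The definition above is directly checkable by brute force for tiny k. This exhaustive Python sketch only makes the definition concrete (a property tester would instead make few queries), and the three functions are made-up examples:

import itertools

k = 3                                   # points of {0,1}^k encoded as ints 0..2^k-1

def is_triangle_free(f1, f2, f3):
    for x, y in itertools.product(range(2 ** k), repeat=2):
        if f1(x) and f2(y) and f3(x ^ y):    # x + y over {0,1}^k is bitwise XOR
            return False
    return True

f1 = lambda x: x == 0b001
f2 = lambda y: y == 0b010
f3 = lambda z: z == 0b011
print(is_triangle_free(f1, f2, f3))     # False: (001, 010, 011) is a triangle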

Cite as

Hu Fu and Robert Kleinberg. Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 669-676, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{fu_et_al:LIPIcs.APPROX-RANDOM.2014.669,
  author =	{Fu, Hu and Kleinberg, Robert},
  title =	{{Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{669--676},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.669},
  URN =		{urn:nbn:de:0030-drops-47304},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.669},
  annote =	{Keywords: Property testing, linear invariance, fast matrix multiplication, uniquely solvable puzzles}
}
Document
Deterministic Coupon Collection and Better Strong Dispersers

Authors: Raghu Meka, Omer Reingold, and Yuan Zhou

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Hashing is one of the main techniques in data processing and algorithm design for very large data sets. While random hash functions satisfy most desirable properties, it is often too expensive to store a fully random hash function. Motivated by this, much attention has been given to designing small families of hash functions suitable for various applications. In this work, we study the question of designing space-efficient hash families H = {h : [U] -> [N]} with the natural property of "covering": H is said to be covering if any set of Omega(N log N) distinct items from the universe (the "coupon-collector limit") is hashed to cover all N bins by most hash functions in H. We give an explicit covering family H of size poly(N) (which is optimal), so that hash functions in H can be specified efficiently by O(log N) bits. We build covering hash functions by drawing a connection to "dispersers", which are quite well studied and have a variety of applications themselves. We in fact need strong dispersers, and we give new constructions of strong dispersers which may be of independent interest. Specifically, we construct strong dispersers with optimal entropy loss in the high min-entropy but very small error (poly(n)/2^n for n-bit sources) regime. We also give a strong disperser construction with constant error that works for any min-entropy. Our constructions achieve these parameters by using part of the source to replace the seed of previous non-strong constructions in surprising ways. In doing so, we take two of the few constructions of dispersers with parameters better than known extractors and make them strong.
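The covering property is the derandomized analogue of the classical coupon-collector fact, which a quick simulation confirms (truly random hashing below; the paper's contribution is matching this with an explicit poly(N)-size family whose functions are specified by O(log N) bits):

import math
import random

N, trials = 100, 1000
m = int(2 * N * math.log(N))            # Omega(N log N) items, small constant

covered = 0
for _ in range(trials):
    bins = {random.randrange(N) for _ in range(m)}
    covered += (len(bins) == N)

print(f"{m} random items covered all {N} bins in {covered}/{trials} trials")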

Cite as

Raghu Meka, Omer Reingold, and Yuan Zhou. Deterministic Coupon Collection and Better Strong Dispersers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 872-884, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{meka_et_al:LIPIcs.APPROX-RANDOM.2014.872,
  author =	{Meka, Raghu and Reingold, Omer and Zhou, Yuan},
  title =	{{Deterministic Coupon Collection and Better Strong Dispersers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{872--884},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  URN =		{urn:nbn:de:0030-drops-47440},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  annote =	{Keywords: Coupon collection, dispersers, strong dispersers, hashing, pseudorandomness}
}
  • Refine by Author
  • 8 Umans, Christopher
  • 6 Thierauf, Thomas
  • 4 Agrawal, Manindra
  • 4 Kabanets, Valentine
  • 3 Fortnow, Lance
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Algebraic complexity theory
  • 2 Theory of computation → Pseudorandomness and derandomization
  • 1 Theory of computation → Circuit complexity
  • 1 Theory of computation → Complexity classes
  • 1 Theory of computation → Design and analysis of algorithms
  • Show More...

  • Refine by Keyword
  • 5 Computational Complexity
  • 5 lower bounds
  • 4 circuits
  • 4 derandomization
  • 3 pseudo-randomness
  • Show More...

  • Refine by Type
  • 25 document

  • Refine by Publication Year
  • 8 2010
  • 8 2019
  • 4 2014
  • 1 2008
  • 1 2013
  • Show More...
