14 Search Results for "Umans, Christopher"


Document
Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391)

Authors: Markus Bläser, Valentine Kabanets, Jacobo Torán, and Christopher Umans

Published in: Dagstuhl Reports, Volume 8, Issue 9 (2019)


Abstract
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.

Cite as

Markus Bläser, Valentine Kabanets, Jacobo Torán, and Christopher Umans. Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391). In Dagstuhl Reports, Volume 8, Issue 9, pp. 133-153, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{blaser_et_al:DagRep.8.9.133,
  author =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Tor\'{a}n, Jacobo and Umans, Christopher},
  title =	{{Algebraic Methods in Computational Complexity (Dagstuhl Seminar 18391)}},
  pages =	{133--153},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{8},
  number =	{9},
  editor =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Tor\'{a}n, Jacobo and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.8.9.133},
  URN =		{urn:nbn:de:0030-drops-103438},
  doi =		{10.4230/DagRep.8.9.133},
  annote =	{Keywords: computational complexity, algebra, (de-) randomization, circuits, coding, lower bounds}
}
Document
Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411)

Authors: Valentine Kabanets, Thomas Thierauf, Jacobo Torán, and Christopher Umans

Published in: Dagstuhl Reports, Volume 6, Issue 10 (2017)


Abstract
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful for developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-path algorithms. This emphasizes once again the central role of algebra in computer science. The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar can play an important role in educating a diverse community about the latest techniques, spurring further progress.

Cite as

Valentine Kabanets, Thomas Thierauf, Jacobo Torán, and Christopher Umans. Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411). In Dagstuhl Reports, Volume 6, Issue 10, pp. 13-32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{kabanets_et_al:DagRep.6.10.13,
  author =	{Kabanets, Valentine and Thierauf, Thomas and Tor\'{a}n, Jacobo and Umans, Christopher},
  title =	{{Algebraic Methods in Computational Complexity (Dagstuhl Seminar 16411)}},
  pages =	{13--32},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{6},
  number =	{10},
  editor =	{Kabanets, Valentine and Thierauf, Thomas and Tor\'{a}n, Jacobo and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.6.10.13},
  URN =		{urn:nbn:de:0030-drops-69504},
  doi =		{10.4230/DagRep.6.10.13},
  annote =	{Keywords: Computational Complexity, lower bounds, approximation, pseudo-randomness, derandomization, circuits}
}
Document
On the Power of Quantum Fourier Sampling

Authors: Bill Fefferman and Christopher Umans

Published in: LIPIcs, Volume 61, 11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016)


Abstract
A line of work initiated by Terhal and DiVincenzo [Terhal/DiVincenzo, arXiv, 2002] and Bremner, Jozsa, and Shepherd [Bremner/Jozsa/Shepherd, Proc. Royal Soc. A, 2010], shows that restricted classes of quantum computation can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer, unless the PH collapses. Aaronson and Arkhipov [Aaronson/Arkhipov, J. Theory of Comp., 2013] take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation, that, under two plausible conjectures, cannot even be approximately sampled within bounded total variation distance, unless the PH collapses. In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled exactly by a quantum computer. We then argue that these distributions cannot be approximately sampled classically, unless the PH collapses, under variants of the Aaronson-Arkhipov conjectures. In particular, we show a general class of quantumly sampleable distributions each of which is based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler implies an average-case approximation. This class of polynomials contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial, as well as many other familiar #P-hard polynomials. Since our distribution likely requires the full power of universal quantum computation, while the Aaronson-Arkhipov distribution uses only linear optical quantum computation with noninteracting bosons, why is our result interesting? We can think of at least three reasons: 1. Since the conjectures required in [Aaronson/Arkhipov, J. Theory of Comp., 2013] have not yet been proven, it seems worthwhile to weaken them as much as possible. We do this in two ways, by weakening both conjectures to apply to any "Efficiently Specifiable" polynomial, and by weakening the so-called Anti-Concentration conjecture so that it need only hold for one distribution in a broad class of distributions. 2. Our construction can be understood without any knowledge of linear optics. While this may be a disadvantage for experimentalists, in our opinion it results in a very clean and simple exposition that may be more immediately accessible to computer scientists. 3. It is extremely common for quantum computations to employ “Quantum Fourier Sampling” in the following way: first apply a classically efficient function to a uniform superposition of inputs, then apply a Quantum Fourier Transform followed by a measurement. Our distributions are obtained in exactly this way, where the classically efficient function is related to a (presumed) hard polynomial. Establishing rigorously a robust sense in which the central primitive of Quantum Fourier Sampling is classically hard seems a worthwhile goal in itself.
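
To make the Quantum Fourier Sampling primitive in point 3 concrete, the following is a minimal classical simulation of that pipeline for a handful of qubits: prepare the uniform superposition, apply a classically computed function as a phase, apply the Fourier transform, and measure. The toy phase function is a placeholder assumption, not the Efficiently Specifiable polynomial of the paper, and the sketch says nothing about hardness.

import numpy as np

# Toy classical simulation of the Fourier-sampling pipeline described above.
n = 4                     # number of qubits (kept tiny so the simulation is cheap)
N = 2 ** n

def toy_phase(x: int) -> complex:
    # stands in for a classically efficient function of the basis state x
    return (-1) ** bin(x).count("1")

# |psi> = (1/sqrt(N)) * sum_x f(x) |x>
psi = np.array([toy_phase(x) for x in range(N)], dtype=complex) / np.sqrt(N)

# unitary Fourier transform over Z_N
qft = np.array([[np.exp(2j * np.pi * j * k / N) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)
psi = qft @ psi

# measurement: outcome y appears with probability |<y|psi>|^2
probs = np.abs(psi) ** 2
outcome = int(np.random.choice(N, p=probs / probs.sum()))
print(f"sampled outcome: {outcome:0{n}b}")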

Cite as

Bill Fefferman and Christopher Umans. On the Power of Quantum Fourier Sampling. In 11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 61, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fefferman_et_al:LIPIcs.TQC.2016.1,
  author =	{Fefferman, Bill and Umans, Christopher},
  title =	{{On the Power of Quantum Fourier Sampling}},
  booktitle =	{11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-019-4},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{61},
  editor =	{Broadbent, Anne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2016.1},
  URN =		{urn:nbn:de:0030-drops-66829},
  doi =		{10.4230/LIPIcs.TQC.2016.1},
  annote =	{Keywords: Quantum Complexity Theory, Sampling Complexity}
}
Document
Algebra in Computational Complexity (Dagstuhl Seminar 14391)

Authors: Manindra Agrawal, Valentine Kabanets, Thomas Thierauf, and Christopher Umans

Published in: Dagstuhl Reports, Volume 4, Issue 9 (2015)


Abstract
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called "chasm at depth 4" suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model, and these are tied to central questions regarding the power of randomness in computation. Representation theory has emerged as an important tool in three separate lines of work: the "Geometric Complexity Theory" approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory has seen several algebraic innovations in recent years, including multiplicity codes and new lower bounds. This seminar brought together researchers who are using a diverse array of algebraic methods in a variety of settings. It played an important role in educating a diverse community about the latest techniques, spurring further progress.

Cite as

Manindra Agrawal, Valentine Kabanets, Thomas Thierauf, and Christopher Umans. Algebra in Computational Complexity (Dagstuhl Seminar 14391). In Dagstuhl Reports, Volume 4, Issue 9, pp. 85-105, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Article{agrawal_et_al:DagRep.4.9.85,
  author =	{Agrawal, Manindra and Kabanets, Valentine and Thierauf, Thomas and Umans, Christopher},
  title =	{{Algebra in Computational Complexity (Dagstuhl Seminar 14391)}},
  pages =	{85--105},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{9},
  editor =	{Agrawal, Manindra and Kabanets, Valentine and Thierauf, Thomas and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.9.85},
  URN =		{urn:nbn:de:0030-drops-48851},
  doi =		{10.4230/DagRep.4.9.85},
  annote =	{Keywords: Computational Complexity, lower bounds, approximation, pseudo-randomness, derandomization, circuits}
}
Document
Algebraic and Combinatorial Methods in Computational Complexity (Dagstuhl Seminar 12421)

Authors: Manindra Agrawal, Thomas Thierauf, and Christopher Umans

Published in: Dagstuhl Reports, Volume 2, Issue 10 (2013)


Abstract
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

Cite as

Manindra Agrawal, Thomas Thierauf, and Christopher Umans. Algebraic and Combinatorial Methods in Computational Complexity (Dagstuhl Seminar 12421). In Dagstuhl Reports, Volume 2, Issue 10, pp. 60-78, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Article{agrawal_et_al:DagRep.2.10.60,
  author =	{Agrawal, Manindra and Thierauf, Thomas and Umans, Christopher},
  title =	{{Algebraic and Combinatorial Methods in Computational Complexity (Dagstuhl Seminar 12421)}},
  pages =	{60--78},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2013},
  volume =	{2},
  number =	{10},
  editor =	{Agrawal, Manindra and Thierauf, Thomas and Umans, Christopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.2.10.60},
  URN =		{urn:nbn:de:0030-drops-39034},
  doi =		{10.4230/DagRep.2.10.60},
  annote =	{Keywords: Computational Complexity, lower bounds, approximation, pseudo-randomness, derandomization, circuits}
}
Document
09421 Abstracts Collection – Algebraic Methods in Computational Complexity

Authors: Manindra Agrawal, Lance Fortnow, Thomas Thierauf, and Christopher Umans

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
From 11.10. to 16.10.2009, the Dagstuhl Seminar 09421 "Algebraic Methods in Computational Complexity" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Manindra Agrawal, Lance Fortnow, Thomas Thierauf, and Christopher Umans. 09421 Abstracts Collection – Algebraic Methods in Computational Complexity. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{agrawal_et_al:DagSemProc.09421.1,
  author =	{Agrawal, Manindra and Fortnow, Lance and Thierauf, Thomas and Umans, Christopher},
  title =	{{09421 Abstracts Collection – Algebraic Methods in Computational Complexity}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--22},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.1},
  URN =		{urn:nbn:de:0030-drops-24181},
  doi =		{10.4230/DagSemProc.09421.1},
  annote =	{Keywords: Computational Complexity, Algebra}
}
Document
09421 Executive Summary – Algebraic Methods in Computational Complexity

Authors: Manindra Agrawal, Lance Fortnow, Thomas Thierauf, and Christopher Umans

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
The seminar brought together more than 50 researchers covering a wide spectrum of complexity theory. The focus on algebraic methods showed once again the great importance of algebraic techniques for theoretical computer science. We had almost 30 talks, most of them about 40 minutes long, leaving ample room for discussions. We also had a much appreciated open problem session. The talks ranged over a broad assortment of subjects with the underlying theme of using algebraic techniques. The seminar was very fruitful and has hopefully initiated new directions in research. Several participants specifically mentioned that they appreciated the particular focus on a common class of techniques (rather than end results) as a unifying theme of the workshop. We look forward to our next meeting!

Cite as

Manindra Agrawal, Lance Fortnow, Thomas Thierauf, and Christopher Umans. 09421 Executive Summary – Algebraic Methods in Computational Complexity. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{agrawal_et_al:DagSemProc.09421.2,
  author =	{Agrawal, Manindra and Fortnow, Lance and Thierauf, Thomas and Umans, Christopher},
  title =	{{09421 Executive Summary – Algebraic Methods in Computational Complexity}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.2},
  URN =		{urn:nbn:de:0030-drops-24100},
  doi =		{10.4230/DagSemProc.09421.2},
  annote =	{Keywords: Computational Complexity, Algebra}
}
Document
An Axiomatic Approach to Algebrization

Authors: Russell Impagliazzo, Valentine Kabanets, and Antonina Kolokolova

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
Non-relativization of complexity issues can be interpreted as giving some evidence that these issues cannot be resolved by "black-box" techniques. In the early 1990s, a sequence of important non-relativizing results was proved, mainly using algebraic techniques. Two approaches have been proposed to understand the power and limitations of these algebraic techniques: (1) Fortnow gives a construction of a class of oracles which have a similar algebraic and logical structure, although they are arbitrarily powerful. He shows that many of the non-relativizing results proved using algebraic techniques hold for all such oracles, but he does not show, e.g., that the outcome of the "P vs. NP" question differs between different oracles in that class. (2) Aaronson and Wigderson give definitions of algebrizing separations and collapses of complexity classes, by comparing classes relative to one oracle to classes relative to an algebraic extension of that oracle. Using these definitions, they show both that the standard collapses and separations "algebrize" and that many of the open questions in complexity fail to "algebrize", suggesting that the arithmetization technique is close to its limits. However, it is unclear how to formalize algebrization of more complicated complexity statements than collapses or separations, and whether the algebrizing statements are, e.g., closed under modus ponens; so it is conceivable that several algebrizing premises could imply (in a relativizing way) a non-algebrizing conclusion. Here, building on the work of Arora, Impagliazzo, and Vazirani [4], we propose an axiomatic approach to "algebrization", which complements and clarifies the approaches of Fortnow and of Aaronson and Wigderson. We present logical theories formalizing the notion of algebrizing techniques so that most algebrizing results are provable within our theories and separations requiring non-algebrizing techniques are independent of them. Our theories extend the [AIV] theory formalizing relativization by adding an Arithmetic Checkability axiom. We show the following: (i) Arithmetic Checkability holds relative to arbitrarily powerful oracles (since Fortnow's algebraic oracles all satisfy the Arithmetic Checkability axiom); by contrast, Local Checkability of [AIV] restricts the oracle power to NP ∩ coNP. (ii) Most of the algebrizing collapses and separations from [AW], such as IP = PSPACE, NP ⊆ ZKIP if one-way functions exist, MA-EXP not in P/poly, etc., are provable from Arithmetic Checkability. (iii) Many of the open complexity questions (shown to require non-algebrizing techniques in [AW]), such as "P vs. NP" and "NP vs. BPP", cannot be proved from Arithmetic Checkability. (iv) Arithmetic Checkability is also insufficient to prove one known result, NEXP = MIP.
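
As a reference point for the arithmetization technique whose reach the paper axiomatizes, here is a minimal sketch of the standard step it builds on: a Boolean formula is replaced by a polynomial over a field that agrees with it on 0/1 inputs. The prime and the example clause are arbitrary choices for illustration; none of this is the paper's own construction.

# Textbook arithmetization of a 3-CNF formula into a polynomial over F_p.
FIELD_PRIME = 2_147_483_647   # arbitrarily chosen prime modulus

def a_not(x):
    return (1 - x) % FIELD_PRIME

def a_and(x, y):
    return (x * y) % FIELD_PRIME

def a_or(x, y):
    return (x + y - x * y) % FIELD_PRIME

def arithmetize_3cnf(clauses, point):
    """Evaluate the arithmetization of a 3-CNF formula at a field point.
    `clauses` is a list of triples of signed variable indices, e.g.
    (1, -2, 3) encodes (x1 OR NOT x2 OR x3)."""
    value = 1
    for clause in clauses:
        lits = [point[abs(i) - 1] if i > 0 else a_not(point[abs(i) - 1])
                for i in clause]
        value = a_and(value, a_or(a_or(lits[0], lits[1]), lits[2]))
    return value

# On Boolean inputs the polynomial agrees with the formula ...
print(arithmetize_3cnf([(1, -2, 3)], [1, 1, 0]))   # satisfied clause -> 1
# ... but it is also defined on arbitrary field points, which is exactly what
# interactive-proof results such as IP = PSPACE exploit.
print(arithmetize_3cnf([(1, -2, 3)], [5, 7, 11]))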

Cite as

Russell Impagliazzo, Valentine Kabanets, and Antonina Kolokolova. An Axiomatic Approach to Algebrization. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{impagliazzo_et_al:DagSemProc.09421.3,
  author =	{Impagliazzo, Russell and Kabanets, Valentine and Kolokolova, Antonina},
  title =	{{An Axiomatic Approach to Algebrization}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--19},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.3},
  URN =		{urn:nbn:de:0030-drops-24150},
  doi =		{10.4230/DagSemProc.09421.3},
  annote =	{Keywords: Oracles, arithmetization, algebrization}
}
Document
Deterministic approximation algorithms for the nearest codeword problem

Authors: Noga Alon, Rina Panigrahy, and Sergey Yekhanin

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
The Nearest Codeword Problem (NCP) is a basic algorithmic question in the theory of error-correcting codes. Given a point v in F_2^n and a linear space L in F_2^n of dimension k, NCP asks to find a point l in L that minimizes the (Hamming) distance from v. It is well known that the nearest codeword problem is NP-hard. Therefore approximation algorithms are of interest. The best efficient approximation algorithms for the NCP to date are due to Berman and Karpinski. These are a deterministic algorithm that achieves an approximation ratio of O(k/c) for an arbitrary constant c, and a randomized algorithm that achieves an approximation ratio of O(k/log n). In this paper we present new deterministic algorithms for approximating the NCP that improve substantially upon the earlier work, (almost) de-randomizing the randomized algorithm of Berman and Karpinski. We also initiate a study of the following Remote Point Problem (RPP). Given a linear space L in F_2^n of dimension k, RPP asks to find a point v in F_2^n that is far from L. We say that an algorithm achieves a remoteness of r for the RPP if it always outputs a point v that is at least r-far from L. In this paper we present a deterministic polynomial-time algorithm that achieves a remoteness of Omega(n log k / k) for all k < n/2. We motivate the remote point problem by relating it to both the nearest codeword problem and the matrix rigidity approach to circuit lower bounds in computational complexity theory.
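
To pin down the problem statement, the following is a brute-force sketch of exact NCP over F_2: it enumerates all 2^k codewords spanned by a basis of L and returns the one closest to v in Hamming distance. It is only feasible for tiny k and is unrelated to the approximation algorithms of the paper; the example basis is made up.

import itertools

# Brute-force exact NCP: try all 2^k codewords spanned by `basis`.
def hamming(u, w):
    return sum(a != b for a, b in zip(u, w))

def nearest_codeword(basis, v):
    n = len(v)
    best = (n + 1, None)
    for coeffs in itertools.product([0, 1], repeat=len(basis)):
        codeword = [0] * n
        for c, row in zip(coeffs, basis):
            if c:
                codeword = [x ^ y for x, y in zip(codeword, row)]
        best = min(best, (hamming(codeword, v), codeword))
    return best          # (distance, nearest codeword)

basis = [[1, 0, 1, 1], [0, 1, 1, 0]]           # k = 2, n = 4
print(nearest_codeword(basis, [1, 1, 0, 0]))   # -> (1, [1, 1, 0, 1])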

Cite as

Noga Alon, Rina Panigrahy, and Sergey Yekhanin. Deterministic approximation algorithms for the nearest codeword problem. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{alon_et_al:DagSemProc.09421.4,
  author =	{Alon, Noga and Panigrahy, Rina and Yekhanin, Sergey},
  title =	{{Deterministic approximation algorithms for the nearest codeword problem}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.4},
  URN =		{urn:nbn:de:0030-drops-24133},
  doi =		{10.4230/DagSemProc.09421.4},
  annote =	{Keywords: }
}
Document
Learning Parities in the Mistake-Bound model

Authors: Harry Buhrman, David Garcia-Soriano, and Arie Matsliah

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
We study the problem of learning parity functions that depend on at most $k$ variables ($k$-parities) attribute-efficiently in the mistake-bound model. We design a simple, deterministic, polynomial-time algorithm for learning $k$-parities with mistake bound $O(n^{1-\frac{c}{k}})$, for any constant $c > 0$. This is the first polynomial-time algorithm that learns $\omega(1)$-parities in the mistake-bound model with mistake bound $o(n)$. Using the standard conversion techniques from the mistake-bound model to the PAC model, our algorithm can also be used for learning $k$-parities in the PAC model. In particular, this implies a slight improvement on the results of Klivans and Servedio for learning $k$-parities in the PAC model. We also show that the $\widetilde{O}(n^{k/2})$-time algorithm of Klivans and Servedio that PAC-learns $k$-parities with optimal sample complexity can be extended to the mistake-bound model.
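
For readers unfamiliar with the model, here is a minimal sketch of the classical mistake-bound learner for parities: keep a basis of linearly independent (example, label) constraints over GF(2); when a new example lies in their span its label is forced, otherwise guess and record the revealed constraint. This baseline makes at most n mistakes and is not attribute-efficient; the paper's $O(n^{1-c/k})$ bound is what improves on it. The tiny target and example stream below are made up.

# Baseline GF(2) learner: basis of (example, label) constraints; at most n mistakes.
class ParityLearner:
    def __init__(self, n: int):
        self.n = n
        self.pivots = {}          # pivot bit position -> (row bitmask, label bit)

    def _reduce(self, x: int, y: int):
        # eliminate known pivot positions, highest first
        for pos in sorted(self.pivots, reverse=True):
            if (x >> pos) & 1:
                row, lab = self.pivots[pos]
                x ^= row
                y ^= lab
        return x, y

    def predict(self, x: int) -> int:
        rem, y = self._reduce(x, 0)
        return y if rem == 0 else 0   # forced label if x is in the span, else guess

    def update(self, x: int, label: int):
        rem, y = self._reduce(x, label)
        if rem:
            self.pivots[rem.bit_length() - 1] = (rem, y)

target = lambda x: bin(x & 0b000101).count("1") & 1      # parity of bits 0 and 2
learner, mistakes = ParityLearner(6), 0
for example in [0b000100, 0b000001, 0b000101, 0b110101]:
    mistakes += learner.predict(example) != target(example)
    learner.update(example, target(example))
print("mistakes:", mistakes)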

Cite as

Harry Buhrman, David Garcia-Soriano, and Arie Matsliah. Learning Parities in the Mistake-Bound model. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{buhrman_et_al:DagSemProc.09421.5,
  author =	{Buhrman, Harry and Garcia-Soriano, David and Matsliah, Arie},
  title =	{{Learning Parities in the Mistake-Bound model}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.5},
  URN =		{urn:nbn:de:0030-drops-24178},
  doi =		{10.4230/DagSemProc.09421.5},
  annote =	{Keywords: Attribute-efficient learning, parities, mistake-bound}
}
Document
Planar Graph Isomorphism is in Log-Space

Authors: Samir Datta, Nutan Limaye, Prajakta Nimbhorkar, Thomas Thierauf, and Fabian Wagner

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
Graph Isomorphism is the prime example of a computational problem with a wide difference between the best known lower and upper bounds on its complexity. There is a significant gap between extant lower and upper bounds for planar graphs as well. We bridge the gap for this natural and important special case by presenting an upper bound that matches the known log-space hardness [JKMT03]. In fact, we show the formally stronger result that planar graph canonization is in log-space. This improves the previously known upper bound of AC1 [MR91]. Our algorithm first constructs the biconnected component tree of a connected planar graph and then refines each biconnected component into a triconnected component tree. The next step is to log-space reduce the biconnected planar graph isomorphism and canonization problems to those for 3-connected planar graphs, which are known to be in log-space by [DLN08]. This is achieved by using the above decomposition, and by making significant modifications to Lindell’s algorithm for tree canonization, along with changes in the space complexity analysis. The reduction from the connected case to the biconnected case requires further new ideas including a non-trivial case analysis and a group theoretic lemma to bound the number of automorphisms of a colored 3-connected planar graph.
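
As background for the tree-canonization step referenced above, the following sketch computes a canonical string of a rooted tree in the simple recursive (AHU-style) way: two rooted trees are isomorphic exactly when their canonical strings match. It is meant only to make the notion concrete; it is neither Lindell's log-space algorithm nor any part of the planar-graph procedure.

# AHU-style canonical string of a rooted tree.
def canon(tree):
    """A node is represented as the list of its children (a leaf is [])."""
    return "(" + "".join(sorted(canon(child) for child in tree)) + ")"

t1 = [[[], []], []]   # root with a two-leaf child and a leaf child
t2 = [[], [[], []]]   # the same tree with the children listed in the other order
print(canon(t1) == canon(t2))   # True: the canonical strings agree
print(canon(t1))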

Cite as

Samir Datta, Nutan Limaye, Prajakta Nimbhorkar, Thomas Thierauf, and Fabian Wagner. Planar Graph Isomorphism is in Log-Space. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{datta_et_al:DagSemProc.09421.6,
  author =	{Datta, Samir and Limaye, Nutan and Nimbhorkar, Prajakta and Thierauf, Thomas and Wagner, Fabian},
  title =	{{Planar Graph Isomorphism is in Log-Space}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--32},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.6},
  URN =		{urn:nbn:de:0030-drops-24169},
  doi =		{10.4230/DagSemProc.09421.6},
  annote =	{Keywords: Planar Graphs, Graph Isomorphism, Logspace}
}
Document
Small space analogues of Valiant's classes and the limitations of skew formula

Authors: Meena Mahajan and Raghavendra Rao B. V.

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
In the uniform circuit model of computation, the width of a boolean circuit exactly characterises the "space" complexity of the computed function. Looking for a similar relationship in Valiant's algebraic model of computation, we propose width of an arithmetic circuit as a possible measure of space. We introduce the class VL as an algebraic variant of deterministic log-space L. In the uniform setting, we show that our definition coincides with that of VPSPACE at polynomial width. Further, to define algebraic variants of non-deterministic space-bounded classes, we introduce the notion of "read-once" certificates for arithmetic circuits. We show that polynomial-size algebraic branching programs can be expressed as a read-once exponential sum over polynomials in VL, i.e., $\mbox{VBP} \in \Sigma^R \cdot \mbox{VL}$. We also show that $\Sigma^R \cdot \mbox{VBP} = \mbox{VBP}$, i.e., VBPs are stable under read-once exponential sums. Further, we show that read-once exponential sums over a restricted class of constant-width arithmetic circuits are within VQP, and this is the largest known such subclass of poly-log-width circuits with this property. We also study the power of skew formulas and show that exponential sums of a skew formula cannot represent the determinant polynomial.

Cite as

Meena Mahajan and Raghavendra Rao B. V.. Small space analogues of Valiant's classes and the limitations of skew formula. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{mahajan_et_al:DagSemProc.09421.7,
  author =	{Mahajan, Meena and Rao B. V., Raghavendra},
  title =	{{Small space analogues of Valiant's classes and the limitations of skew formula}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--23},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.7},
  URN =		{urn:nbn:de:0030-drops-24126},
  doi =		{10.4230/DagSemProc.09421.7},
  annote =	{Keywords: Algebraic circuits, space bounds, circuit width, nondeterministic circuits, skew formulas}
}
Document
Unconditional Lower Bounds against Advice

Authors: Harry Buhrman, Lance Fortnow, and Rahul Santhanam

Published in: Dagstuhl Seminar Proceedings, Volume 9421, Algebraic Methods in Computational Complexity (2010)


Abstract
We show several unconditional lower bounds for exponential time classes against polynomial time classes with advice, including: (1) For any constant c, NEXP not in P^{NP[n^c]} (2) For any constant c, MAEXP not in MA/n^c (3) BPEXP not in BPP/n^{o(1)}. It was previously unknown even whether NEXP in NP/n^{0.01}. For the probabilistic classes, no lower bounds for uniform exponential time against advice were known before. We also consider the question of whether these lower bounds can be made to work on almost all input lengths rather than on infinitely many. We give an oracle relative to which NEXP in i.o.NP, which provides evidence that this is not possible with current techniques.

Cite as

Harry Buhrman, Lance Fortnow, and Rahul Santhanam. Unconditional Lower Bounds against Advice. In Algebraic Methods in Computational Complexity. Dagstuhl Seminar Proceedings, Volume 9421, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{buhrman_et_al:DagSemProc.09421.8,
  author =	{Buhrman, Harry and Fortnow, Lance and Santhanam, Rahul},
  title =	{{Unconditional Lower Bounds against Advice}},
  booktitle =	{Algebraic Methods in Computational Complexity},
  pages =	{1--11},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{9421},
  editor =	{Manindra Agrawal and Lance Fortnow and Thomas Thierauf and Christopher Umans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09421.8},
  URN =		{urn:nbn:de:0030-drops-24112},
  doi =		{10.4230/DagSemProc.09421.8},
  annote =	{Keywords: Advice, derandomization, diagonalization, lower bounds, semantic classes}
}
Document
Fast polynomial factorization and modular composition

Authors: Kiran Kedlaya and Christopher Umans

Published in: Dagstuhl Seminar Proceedings, Volume 8381, Computational Complexity of Discrete Problems (2008)


Abstract
We obtain randomized algorithms for factoring degree $n$ univariate polynomials over $F_q$ requiring $O(n^{1.5 + o(1)} \log^{1+o(1)} q + n^{1 + o(1)} \log^{2+o(1)} q)$ bit operations. When $\log q < n$, this is asymptotically faster than the best previous algorithms (von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998)); for $\log q \ge n$, it matches the asymptotic running time of the best known algorithms. The improvements come from new algorithms for modular composition of degree $n$ univariate polynomials, which is the asymptotic bottleneck in fast algorithms for factoring polynomials over finite fields. The best previous algorithms for modular composition use $O(n^{(\omega + 1)/2})$ field operations, where $\omega$ is the exponent of matrix multiplication (Brent & Kung (1978)), with a slight improvement in the exponent achieved by employing fast rectangular matrix multiplication (Huang & Pan (1997)). We show that modular composition and multipoint evaluation of multivariate polynomials are essentially equivalent, in the sense that an algorithm for one achieving exponent $\alpha$ implies an algorithm for the other with exponent $\alpha + o(1)$, and vice versa. We then give two new algorithms that solve the problem optimally (up to lower order terms): an algebraic algorithm for fields of characteristic at most $n^{o(1)}$, and a nonalgebraic algorithm that works in arbitrary characteristic. The latter algorithm works by lifting to characteristic 0, applying a small number of rounds of multimodular reduction, and finishing with a small number of multidimensional FFTs. The final evaluations are reconstructed using the Chinese Remainder Theorem. As a bonus, this algorithm produces a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithms use techniques which are commonly employed in practice, so they may be competitive for real problem sizes. This contrasts with all previous subquadratic algorithms for these problems, which rely on fast matrix multiplication. This is joint work with Kiran Kedlaya.
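
To fix what "modular composition" means here, the following is a naive sketch over a small prime field: it computes f(g(x)) mod h(x) by Horner's rule, doing one polynomial multiplication modulo h per coefficient of f. It assumes h is monic, the prime and example polynomials are arbitrary, and it is far slower than Brent & Kung's method, let alone the nearly optimal algorithms of the paper.

# Naive f(g(x)) mod h(x) via Horner's rule; assumes h is monic.
P = 101   # arbitrary small prime modulus

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_mod(a, h):
    # reduce a modulo the monic polynomial h (coefficients listed low degree first)
    a = a[:]
    while len(a) >= len(h):
        if a[-1]:
            c, shift = a[-1], len(a) - len(h)
            for i, hc in enumerate(h):
                a[shift + i] = (a[shift + i] - c * hc) % P
        a.pop()
    return a

def modular_composition(f, g, h):
    """Return f(g(x)) mod h(x), all polynomials given as coefficient lists."""
    result = [0]
    for c in reversed(f):                 # Horner's rule in the ring F_P[x]/(h)
        result = poly_mod(poly_mul(result, g), h) or [0]
        result[0] = (result[0] + c) % P
    return result

# Example: f = x^2 + 1, g = x + 2, h = x^3 - 1, so f(g(x)) = x^2 + 4x + 5 mod h.
f, g, h = [1, 0, 1], [2, 1], [(-1) % P, 0, 0, 1]
print(modular_composition(f, g, h))       # -> [5, 4, 1]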

Cite as

Kiran Kedlaya and Christopher Umans. Fast polynomial factorization and modular composition. In Computational Complexity of Discrete Problems. Dagstuhl Seminar Proceedings, Volume 8381, pp. 1-33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{kedlaya_et_al:DagSemProc.08381.5,
  author =	{Kedlaya, Kiran and Umans, Christopher},
  title =	{{Fast polynomial factorization and modular composition}},
  booktitle =	{Computational Complexity of Discrete Problems},
  pages =	{1--33},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8381},
  editor =	{Peter Bro Miltersen and R\"{u}diger Reischuk and Georg Schnitger and Dieter van Melkebeek},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08381.5},
  URN =		{urn:nbn:de:0030-drops-17771},
  doi =		{10.4230/DagSemProc.08381.5},
  annote =	{Keywords: Modular composition; polynomial factorization; multipoint evaluation; Chinese Remaindering}
}