Search Results

Documents authored by Kothari, Pravesh


Found 2 Possible Name Variants:

Kothari, Pravesh K.

Document
Track A: Algorithms, Complexity and Games
Approximating Max-Cut on Bounded Degree Graphs: Tighter Analysis of the FKL Algorithm

Authors: Jun-Ting Hsieh and Pravesh K. Kothari

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In this note, we describe an (α_GW + Ω̃(1/d²))-factor approximation algorithm for Max-Cut on weighted graphs of degree ⩽ d. Here, α_GW ≈ 0.878 is the worst-case approximation ratio of the Goemans-Williamson rounding for Max-Cut. This improves on previous results for unweighted graphs by Feige, Karpinski, and Langberg [Feige et al., 2002] and Florén [Florén, 2016]. Our guarantee follows from a tighter analysis of the solution obtained by applying a natural local improvement procedure to the Goemans-Williamson rounding of the basic SDP strengthened with triangle inequalities.
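
A minimal Python sketch of the two-step procedure the abstract describes: Goemans-Williamson hyperplane rounding of already-computed SDP vectors, followed by the natural local improvement step. The SDP solve is assumed to have been done elsewhere, and the names (round_and_improve, vecs, W) are illustrative, not from the paper.

import numpy as np

def round_and_improve(vecs, W, rng=np.random.default_rng(0)):
    """Hyperplane rounding plus greedy single-vertex flips.

    vecs: (n, n) array whose row i is the unit SDP vector of vertex i.
    W:    (n, n) symmetric nonnegative weight matrix with zero diagonal.
    """
    g = rng.standard_normal(vecs.shape[1])
    x = np.sign(vecs @ g)                  # random hyperplane cut
    x[x == 0] = 1
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            # Flipping vertex i changes the cut weight by exactly x_i * sum_j W_ij x_j.
            if x[i] * (W[i] @ x) > 0:
                x[i] = -x[i]
                improved = True
    return x                               # x[i] in {-1, +1} gives the two sides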

Cite as

Jun-Ting Hsieh and Pravesh K. Kothari. Approximating Max-Cut on Bounded Degree Graphs: Tighter Analysis of the FKL Algorithm. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 77:1-77:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hsieh_et_al:LIPIcs.ICALP.2023.77,
  author =	{Hsieh, Jun-Ting and Kothari, Pravesh K.},
  title =	{{Approximating Max-Cut on Bounded Degree Graphs: Tighter Analysis of the FKL Algorithm}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{77:1--77:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.77},
  URN =		{urn:nbn:de:0030-drops-181291},
  doi =		{10.4230/LIPIcs.ICALP.2023.77},
  annote =	{Keywords: Max-Cut, approximation algorithm, semidefinite programming}
}
Document
Track A: Algorithms, Complexity and Games
Ellipsoid Fitting up to a Constant

Authors: Jun-Ting Hsieh, Pravesh K. Kothari, Aaron Potechin, and Jeff Xu

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In [Saunderson, 2011; Saunderson et al., 2013], Saunderson, Parrilo, and Willsky asked the following elegant geometric question: what is the largest m = m(d) such that there is an ellipsoid in ℝ^d that passes through v_1, v_2, …, v_m with high probability when the v_i's are chosen independently from the standard Gaussian distribution N(0,I_d)? The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix X such that v_i^⊤ X v_i = 1 for every 1 ⩽ i ⩽ m, a natural example of a random semidefinite program. SPW conjectured that m = (1-o(1)) d²/4 with high probability. Very recently, Potechin, Turner, Venkat, and Wein [Potechin et al., 2022] and Kane and Diakonikolas [Kane and Diakonikolas, 2022] proved that m ≳ d²/log^{O(1)} d via a certain natural, explicit construction. In this work, we give a substantially tighter analysis of their construction to prove that m ≳ d²/C for an absolute constant C > 0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition, which has recently been used to analyze correlated random matrices arising in various areas [Barak et al., 2019; Bafna et al., 2022]. Our key new technical tool is a refined method for proving singular value upper bounds on certain correlated random matrices that are tight up to absolute, dimension-independent constants. In contrast, all previous methods for analyzing such matrices lose logarithmic factors in the dimension.
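
The random semidefinite program in the question is short to write down. Below is a minimal feasibility check in Python using cvxpy and the identity v_i^⊤ X v_i = ⟨X, v_i v_i^⊤⟩; this generic solve is only a sketch for toy sizes, not the explicit construction the paper analyzes.

import cvxpy as cp
import numpy as np

d, m = 10, 20                              # toy sizes; the conjecture concerns m ≈ d²/4
rng = np.random.default_rng(1)
V = rng.standard_normal((m, d))            # rows v_1, …, v_m drawn from N(0, I_d)

X = cp.Variable((d, d), PSD=True)
constraints = [cp.sum(cp.multiply(X, np.outer(v, v))) == 1 for v in V]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                         # "optimal" means a fitting ellipsoid exists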

Cite as

Jun-Ting Hsieh, Pravesh K. Kothari, Aaron Potechin, and Jeff Xu. Ellipsoid Fitting up to a Constant. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hsieh_et_al:LIPIcs.ICALP.2023.78,
  author =	{Hsieh, Jun-Ting and Kothari, Pravesh K. and Potechin, Aaron and Xu, Jeff},
  title =	{{Ellipsoid Fitting up to a Constant}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.78},
  URN =		{urn:nbn:de:0030-drops-181304},
  doi =		{10.4230/LIPIcs.ICALP.2023.78},
  annote =	{Keywords: Semidefinite programming, random matrices, average-case complexity}
}
Document
APPROX
Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number

Authors: Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
Let H(k,n,p) be the distribution on k-uniform hypergraphs where every subset of [n] of size k is included as a hyperedge with probability p, independently. In this work, we design and analyze a simple spectral algorithm that certifies a bound on the size of the largest clique, ω(H), in hypergraphs H ∼ H(k,n,p). For example, for any constant p, with high probability over the choice of the hypergraph, our spectral algorithm certifies a bound of Õ(√n) on the clique number in polynomial time. This matches, up to polylog(n) factors, the best known certificate for the clique number in random graphs, which is the special case of k = 2. Prior to our work, the best known refutation algorithms [Amin Coja-Oghlan et al., 2004; Sarah R. Allen et al., 2015] rely on a reduction to the problem of refuting random k-XOR via Feige's XOR trick [Uriel Feige, 2002], and yield a polynomially worse bound of Õ(n^{3/4}) on the clique number when p = O(1). Our algorithm bypasses the XOR trick and relies instead on a natural generalization of the Lovász theta semidefinite programming relaxation for cliques in hypergraphs.
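
For intuition, here is the k = 2 special case mentioned in the abstract as a Python sketch. For the signed matrix M = 2A − J + I (entries +1 on edges, −1 on non-edges, 0 on the diagonal), a clique S gives u^⊤ M u = |S| − 1 for its normalized indicator u, so 1 + λ_max(M) is a certified upper bound on ω(G); for G ~ G(n, 1/2) this is about 1 + 2√n with high probability. The function name is illustrative.

import numpy as np

def clique_certificate(A):
    """Certified upper bound on the clique number of the graph with adjacency A."""
    n = A.shape[0]
    M = 2 * A - np.ones((n, n)) + np.eye(n)    # +1 edges, -1 non-edges, 0 diagonal
    return 1 + np.linalg.eigvalsh(M)[-1]       # eigvalsh sorts ascending

rng = np.random.default_rng(2)
n = 400
A = np.triu(rng.random((n, n)) < 0.5, 1).astype(float)
A = A + A.T                                    # adjacency of G ~ G(n, 1/2)
print(clique_certificate(A), 1 + 2 * np.sqrt(n))  # certificate vs. ~1 + 2*sqrt(n)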

Cite as

Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar. Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 42:1-42:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2022.42,
  author =	{Guruswami, Venkatesan and Kothari, Pravesh K. and Manohar, Peter},
  title =	{{Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{42:1--42:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.42},
  URN =		{urn:nbn:de:0030-drops-171642},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.42},
  annote =	{Keywords: Planted clique, Average-case complexity, Spectral refutation, Random matrix theory}
}
Document
RANDOM
Memory-Sample Lower Bounds for Learning Parity with Noise

Authors: Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x₁,…,x_n) ∈ {0,1}ⁿ from a stream of random linear equations over 𝔽₂ that are correct with probability 1/2+ε and flipped with probability 1/2-ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n²/ε) or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [Garg et al., 2018], when the samples are noisy. A matrix M: A × X → {-1,1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a₁, b₁), (a₂, b₂), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x) with probability 1/2+ε and b_i = -M(a_i,x) with probability 1/2-ε (0 < ε < 1/2). Assume that k, 𝓁, r are such that any submatrix of M with at least 2^{-k} ⋅ |A| rows and at least 2^{-𝓁} ⋅ |X| columns has a bias of at most 2^{-r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω((k⋅𝓁)/ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{-Ω(r)}). In particular, this shows that for a large class of learning problems, the same as those in [Garg et al., 2018], any learning algorithm requires either a memory of size at least Ω(((log|X|)⋅(log|A|))/ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments in [Ran Raz, 2017; Garg et al., 2018] to the noisy case.
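
The input model of the headline result (specialized to parity) is easy to simulate; the following Python sketch generates the stream of noisy equations. The generator is purely illustrative and has no bearing on the lower bound itself.

import numpy as np

def noisy_parity_stream(x, eps, num_samples, rng=np.random.default_rng(3)):
    """Yield (a_i, b_i) with b_i = <a_i, x> mod 2, flipped with probability 1/2 - eps."""
    n = len(x)
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n)
        b = int(a @ x) % 2
        if rng.random() < 0.5 - eps:       # noise: flip the label
            b ^= 1
        yield a, b

x = np.array([1, 0, 1, 1, 0])              # hidden parity
for a, b in noisy_parity_stream(x, eps=0.1, num_samples=3):
    print(a, b)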

Cite as

Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz. Memory-Sample Lower Bounds for Learning Parity with Noise. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2021.60,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Liu, Pengda and Raz, Ran},
  title =	{{Memory-Sample Lower Bounds for Learning Parity with Noise}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  URN =		{urn:nbn:de:0030-drops-147534},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  annote =	{Keywords: memory-sample tradeoffs, learning parity under noise, space lower bound, branching program}
}
Document
A Stress-Free Sum-Of-Squares Lower Bound for Coloring

Authors: Pravesh K. Kothari and Peter Manohar

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
We prove that with high probability over the choice of a random graph G from the Erdős-Rényi distribution G(n, 1/2), a natural n^{O(ε² log n)}-time, degree O(ε² log n) sum-of-squares semidefinite program cannot refute the existence of a valid k-coloring of G for k = n^{1/2 + ε}. Our result implies that the refutation guarantee of the basic semidefinite program (a close variant of the Lovász theta function) cannot be appreciably improved by a natural o(log n)-degree sum-of-squares strengthening, and this is tight up to a n^{o(1)} slack in k. To the best of our knowledge, this is the first lower bound for coloring G(n, 1/2) for even a single round of strengthening of the basic SDP in any SDP hierarchy. Our proof relies on a new variant of instance-preserving non-pointwise complete reduction within SoS from coloring a graph to finding large independent sets in it. Our proof is (perhaps surprisingly) short, simple, and does not require the complicated spectral norm bounds on random matrices with dependent entries that have otherwise been necessary in the proofs of many similar results [Boaz Barak et al., 2016; S. B. Hopkins et al., 2017; Dmitriy Kunisky and Afonso S. Bandeira, 2019; Mrinalkanti Ghosh et al., 2020; Mohanty et al., 2020]. Our result formally holds for a constraint system where vertices are allowed to belong to multiple color classes; we leave the extension to the formally stronger formulation of coloring, where vertices must belong to unique color classes, as an outstanding open problem.
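
For context, the basic SDP referenced here is a close variant of the Lovász theta function, which fits in a few lines of cvxpy (a sketch for intuition, not the construction in the paper). Since ϑ(G) upper-bounds the independence number α(G), the quantity n/ϑ(G) is the kind of coloring lower bound the basic SDP certifies.

import cvxpy as cp

def lovasz_theta(n, edges):
    """theta(G) = max <J, X> s.t. X PSD, tr(X) = 1, X_ij = 0 on edges."""
    X = cp.Variable((n, n), PSD=True)
    cons = [cp.trace(X) == 1] + [X[i, j] == 0 for (i, j) in edges]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), cons)
    prob.solve()
    return prob.value

print(lovasz_theta(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # C_5: sqrt(5) ≈ 2.236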

Cite as

Pravesh K. Kothari and Peter Manohar. A Stress-Free Sum-Of-Squares Lower Bound for Coloring. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kothari_et_al:LIPIcs.CCC.2021.23,
  author =	{Kothari, Pravesh K. and Manohar, Peter},
  title =	{{A Stress-Free Sum-Of-Squares Lower Bound for Coloring}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.23},
  URN =		{urn:nbn:de:0030-drops-142978},
  doi =		{10.4230/LIPIcs.CCC.2021.23},
  annote =	{Keywords: Sum-of-Squares, Graph Coloring, Independent Set, Lower Bounds}
}
Document
RANDOM
Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG

Authors: Sumegha Garg, Pravesh K. Kothari, and Ran Raz

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
In this work, we establish lower bounds against memory-bounded algorithms for distinguishing between natural pairs of related distributions from samples that arrive in a streaming setting. Our first result applies to the problem of distinguishing the uniform distribution on {0,1}ⁿ from the uniform distribution on some unknown linear subspace of {0,1}ⁿ. As a specific corollary, we show that any algorithm that distinguishes between the uniform distribution on {0,1}ⁿ and the uniform distribution on an n/2-dimensional linear subspace of {0,1}ⁿ with non-negligible advantage needs 2^Ω(n) samples or Ω(n²) memory (tight up to constants in the exponent). Our second result applies to distinguishing outputs of Goldreich's local pseudorandom generator from the uniform distribution on the output domain. Specifically, Goldreich's pseudorandom generator G fixes a predicate P:{0,1}^k → {0,1} and a collection of subsets S₁, S₂, …, S_m ⊆ [n] of size k. For any seed x ∈ {0,1}ⁿ, it outputs P(x_{S₁}), P(x_{S₂}), …, P(x_{S_m}), where x_{S_i} is the projection of x to the coordinates in S_i. We prove that whenever P is t-resilient (all non-zero Fourier coefficients of (-1)^P are of degree t or higher), no algorithm with memory < n^ε can distinguish the output of G from the uniform distribution on {0,1}^m with a large inverse polynomial advantage, for stretch m ≤ (n/t)^{(1-ε)/36 ⋅ t} (barring some restrictions on k). The lower bound holds in the streaming model where at each time step i, S_i ⊆ [n] is a randomly chosen (ordered) subset of size k and the distinguisher sees either P(x_{S_i}) or a uniformly random bit along with S_i. An important implication of our second result is the security of Goldreich's generator with super-linear stretch (in the streaming model) against memory-bounded adversaries, whenever the predicate P satisfies the necessary condition of t-resiliency identified in various prior works. Our proof builds on the recently developed machinery for proving time-space trade-offs (Raz 2016 and follow-ups). Our key technical contribution is to adapt this machinery to distinguishing problems, in contrast to prior works that prove similar results for search/learning problems.
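
A minimal Python sketch of Goldreich's generator as defined in the abstract. The specific 5-ary predicate below (an XOR of three bits plus an AND of two) is a standard illustrative choice whose Fourier support has degree ≥ 3; it is an assumption of this sketch, not a predicate fixed by the paper.

import numpy as np

def goldreich_prg(x, subsets, P):
    """Output P(x_{S_1}), …, P(x_{S_m}) for ordered subsets S_i of [n]."""
    return np.array([P(x[S]) for S in subsets])

def P(z):                                   # P(z) = z1 + z2 + z3 + z4*z5 mod 2
    return (z[0] ^ z[1] ^ z[2]) ^ (z[3] & z[4])

rng = np.random.default_rng(4)
n, k, m = 32, 5, 64
x = rng.integers(0, 2, size=n)              # seed
subsets = [rng.choice(n, size=k, replace=False) for _ in range(m)]
print(goldreich_prg(x, subsets, P))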

Cite as

Sumegha Garg, Pravesh K. Kothari, and Ran Raz. Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2020.21,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Raz, Ran},
  title =	{{Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  URN =		{urn:nbn:de:0030-drops-126248},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  annote =	{Keywords: memory-sample tradeoffs, bounded storage cryptography, Goldreich’s local PRG, distinguishing problems, refuting CSPs}
}
Document
Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture

Authors: Boaz Barak, Pravesh K. Kothari, and David Steurer

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Dinur, Khot, Kindler, Minzer, and Safra (2016) recently showed that the (imperfect completeness variant of) Khot's 2-to-2 Games Conjecture follows from a combinatorial hypothesis about the soundness of a certain "Grassmannian agreement tester". In this work, we show that the soundness of the Grassmannian agreement tester follows from a conjecture we call the "Shortcode Expansion Hypothesis", characterizing the non-expanding sets of the degree-two Shortcode graph. We also show that the latter conjecture is equivalent to a characterization of the non-expanding sets in the Grassmann graph, as hypothesized by a follow-up paper of Dinur et al. (2017). Following our work, Khot, Minzer and Safra (2018) proved the "Shortcode Expansion Hypothesis". Combining their proof with our result and the reduction of Dinur et al. (2016) completes the proof of the 2-to-2 conjecture with imperfect completeness. We believe that the Shortcode graph provides a useful view of both the hypothesis and the reduction, and might be suitable for obtaining new hardness reductions.

Cite as

Boaz Barak, Pravesh K. Kothari, and David Steurer. Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{barak_et_al:LIPIcs.ITCS.2019.9,
  author =	{Barak, Boaz and Kothari, Pravesh K. and Steurer, David},
  title =	{{Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.9},
  URN =		{urn:nbn:de:0030-drops-101022},
  doi =		{10.4230/LIPIcs.ITCS.2019.9},
  annote =	{Keywords: Unique Games Conjecture, Small-Set Expansion, Grassmann Graph, Shortcode}
}
Document
SOS Lower Bounds with Hard Constraints: Think Global, Act Local

Authors: Pravesh K. Kothari, Ryan O'Donnell, and Tselil Schramm

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Many previous Sum-of-Squares (SOS) lower bounds for CSPs had two deficiencies related to global constraints. First, they were not able to support a "cardinality constraint", as in, say, the Min-Bisection problem. Second, while the pseudoexpectation of the objective function was shown to have some value β, it did not necessarily actually "satisfy" the constraint "objective = β". In this paper we show how to remedy both deficiencies in the case of random CSPs, by translating global constraints into local constraints. Using these ideas, we also show that degree-Ω(√n) SOS does not provide a (4/3 - ε)-approximation for Min-Bisection, and degree-Ω(n) SOS does not provide a (11/12 + ε)-approximation for Max-Bisection or a (5/4 - ε)-approximation for Min-Bisection. No prior SOS lower bounds for these problems were known.

Cite as

Pravesh K. Kothari, Ryan O'Donnell, and Tselil Schramm. SOS Lower Bounds with Hard Constraints: Think Global, Act Local. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 49:1-49:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kothari_et_al:LIPIcs.ITCS.2019.49,
  author =	{Kothari, Pravesh K. and O'Donnell, Ryan and Schramm, Tselil},
  title =	{{SOS Lower Bounds with Hard Constraints: Think Global, Act Local}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{49:1--49:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.49},
  URN =		{urn:nbn:de:0030-drops-101420},
  doi =		{10.4230/LIPIcs.ITCS.2019.49},
  annote =	{Keywords: sum-of-squares hierarchy, random constraint satisfaction problems}
}
Document
Improper Learning by Refuting

Authors: Pravesh K. Kothari and Roi Livni

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
The sample complexity of learning a Boolean-valued function class is precisely characterized by its Rademacher complexity. This has little bearing, however, on the sample complexity of efficient agnostic learning. We introduce refutation complexity, a natural computational analog of the Rademacher complexity of a Boolean concept class, and show that it exactly characterizes the sample complexity of efficient agnostic learning. Informally, the refutation complexity of a class C is the minimum number of example-label pairs required to efficiently distinguish between the case that the labels correlate with the evaluation of some member of C (structure) and the case where the labels are i.i.d. Rademacher random variables (noise). The easy direction of this relationship was implicitly used in the recent framework for improper PAC learning lower bounds of Daniely and co-authors via connections to the hardness of refuting random constraint satisfaction problems. Our work can be seen as turning the relationship between agnostic learning and refutation that is implicit in their work into an explicit equivalence. In a recent, independent work, Salil Vadhan discovered a similar relationship between refutation and PAC learning in the realizable (i.e., noiseless) case.

Cite as

Pravesh K. Kothari and Roi Livni. Improper Learning by Refuting. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 55:1-55:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kothari_et_al:LIPIcs.ITCS.2018.55,
  author =	{Kothari, Pravesh K. and Livni, Roi},
  title =	{{Improper Learning by Refuting}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{55:1--55:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.55},
  URN =		{urn:nbn:de:0030-drops-83488},
  doi =		{10.4230/LIPIcs.ITCS.2018.55},
  annote =	{Keywords: learning theory, computational learning}
}

Kothari, Pravesh

Document
Embedding Hard Learning Problems Into Gaussian Space

Authors: Adam Klivans and Pravesh Kothari

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfspaces requires n^{Ω(log(1/ε))} time under the assumption that k-sparse LPN requires n^{Ω(k)} time, ruling out a polynomial-time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian. We also show that the problem of agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs on the uniform distribution in polynomial time. This complements the surprising result of Andoni et al. (2013), who show that sparse polynomials are learnable under random Gaussian noise in polynomial time. Taken together, these results show the inherent difficulty of designing supervised learning algorithms in Euclidean space, even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution, which allows us to relate the hardness of learning problems on two different domains and distributions.

Cite as

Adam Klivans and Pravesh Kothari. Embedding Hard Learning Problems Into Gaussian Space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 793-809, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{klivans_et_al:LIPIcs.APPROX-RANDOM.2014.793,
  author =	{Klivans, Adam and Kothari, Pravesh},
  title =	{{Embedding Hard Learning Problems Into Gaussian Space}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{793--809},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.793},
  URN =		{urn:nbn:de:0030-drops-47391},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.793},
  annote =	{Keywords: distribution-specific hardness of learning, gaussian space, halfspace-learning, agnostic learning}
}