2 Search Results for "Daniely, Amit"


Document
Tight Hardness Results for Training Depth-2 ReLU Networks

Authors: Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing the squared error (for k > 1) even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error ε. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest k-Subgraph problem in subexponential time (these hypotheses are used separately in proving different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best fitting ReLU must run in time exponential in 1/ε². Together with previous work on improperly learning a ReLU [Surbhi Goel et al., 2017], this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks in both the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependency on ε.
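
To make the objective concrete, the following is a minimal NumPy sketch (ours, not the paper's) of the training problem whose hardness is studied: fitting f(x) = a_1·ReLU(w_1·x) + … + a_k·ReLU(w_k·x) to a training set by minimizing the empirical square loss. Gradient descent appears here only as an illustrative heuristic; the paper's point is that exactly minimizing this loss is NP-hard even for k = 1. All function names below are hypothetical.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def square_loss(W, a, X, y):
    # Empirical square loss of f(x) = sum_i a_i * ReLU(w_i . x).
    # W: (k, d) inner weights, a: (k,) outer weights (may be negative),
    # X: (n, d) training points, y: (n,) labels.
    return np.mean((relu(X @ W.T) @ a - y) ** 2)

def fit_relu_sum(X, y, k=1, steps=5000, lr=0.05, seed=0):
    # Plain gradient descent on the square loss -- a heuristic only;
    # reaching the global minimum is exactly the NP-hard problem above.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(k, d))
    a = rng.normal(size=k)
    for _ in range(steps):
        Z = X @ W.T                                    # (n, k) pre-activations
        H = relu(Z)                                    # (n, k) hidden activations
        r = H @ a - y                                  # (n,) residuals
        a -= lr * (2.0 / n) * (H.T @ r)                # gradient w.r.t. a
        W -= lr * (2.0 / n) * (((Z > 0) * np.outer(r, a)).T @ X)  # w.r.t. W
    return W, a, square_loss(W, a, X, y)

# Toy usage in the realizable setting: labels from a single hidden ReLU.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = relu(X @ rng.normal(size=5))
W, a, loss = fit_relu_sum(X, y, k=1)
print(f"final square loss: {loss:.4f}")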

Cite as

Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman. Tight Hardness Results for Training Depth-2 ReLU Networks. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{goel_et_al:LIPIcs.ITCS.2021.22,
  author =	{Goel, Surbhi and Klivans, Adam and Manurangsi, Pasin and Reichman, Daniel},
  title =	{{Tight Hardness Results for Training Depth-2 ReLU Networks}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.22},
  URN =		{urn:nbn:de:0030-drops-135611},
  doi =		{10.4230/LIPIcs.ITCS.2021.22},
  annote =	{Keywords: ReLU, Learning Algorithm, Running Time Lower Bound}
}
Document
On the practically interesting instances of MAXCUT

Authors: Yonatan Bilu, Amit Daniely, Nati Linial, and Michael Saks

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
For many optimization problems, the instances of practical interest often occupy just a tiny part of the space of all instances. Following Bilu and Linial (2010), we apply this perspective to MAXCUT, viewed as a clustering problem. Using a variety of techniques, we investigate practically interesting instances of this problem. Specifically, we show how to solve in polynomial time distinguished, metric, expanding, and dense instances of MAXCUT under mild stability assumptions. In particular, (1 + ε)-stability (which is optimal) suffices for metric and dense MAXCUT. We also show how to solve in polynomial time Ω(√n)-stable instances of MAXCUT, substantially improving the best previously known result.
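
As a concrete companion to the stability notion used above, here is a small brute-force Python sketch (ours, not the paper's polynomial-time algorithms) of Bilu–Linial γ-stability for MAXCUT: an instance is γ-stable if its maximum cut stays the unique optimum under every perturbation that multiplies each edge weight by a factor in [1, γ]. The exhaustive search is exponential and only meant to illustrate the definition on toy graphs; all function names are hypothetical.

from itertools import combinations

def cut_edges(S, edges):
    # Edges crossing the cut (S, V \ S).
    return {e for e in edges if (e[0] in S) != (e[1] in S)}

def max_cut(n, w):
    # Exhaustive maximum cut; exponential time, for illustration only.
    edges = list(w)
    best_S, best_val = None, -1.0
    for r in range(n // 2 + 1):          # complements give the same cut
        for S in map(set, combinations(range(n), r)):
            val = sum(w[e] for e in cut_edges(S, edges))
            if val > best_val:
                best_S, best_val = S, val
    return best_S, best_val

def is_gamma_stable(n, w, gamma):
    # For each rival cut T, the perturbation most favorable to T boosts
    # the edges T cuts but the optimum does not by a factor of gamma and
    # leaves the rest alone; stability requires the optimum to still win.
    edges = list(w)
    S_star, _ = max_cut(n, w)
    opt = cut_edges(S_star, edges)
    for r in range(n + 1):
        for T in map(set, combinations(range(n), r)):
            rival = cut_edges(T, edges)
            if rival == opt:
                continue
            gain = gamma * sum(w[e] for e in rival - opt)   # boosted edges
            loss = sum(w[e] for e in opt - rival)           # unboosted edges
            if gain >= loss:
                return False
    return True

# Toy usage: a weighted triangle that is gamma-stable exactly for gamma < 2.
w = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 3.0}
print(max_cut(3, w))                  # ({2}, 5.0)
print(is_gamma_stable(3, w, 1.5))     # True
print(is_gamma_stable(3, w, 2.5))     # False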

Cite as

Yonatan Bilu, Amit Daniely, Nati Linial, and Michael Saks. On the practically interesting instances of MAXCUT. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 526-537, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{bilu_et_al:LIPIcs.STACS.2013.526,
  author =	{Bilu, Yonatan and Daniely, Amit and Linial, Nati and Saks, Michael},
  title =	{{On the practically interesting instances of MAXCUT}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{526--537},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.526},
  URN =		{urn:nbn:de:0030-drops-39625},
  doi =		{10.4230/LIPIcs.STACS.2013.526},
  annote =	{Keywords: MAXCUT, Clustering, Hardness in practice, Stability, Non worst-case analysis}
}
  • Refine by Author
  • 1 Bilu, Yonatan
  • 1 Daniely, Amit
  • 1 Goel, Surbhi
  • 1 Klivans, Adam
  • 1 Linial, Nati

  • Refine by Classification
  • 1 Theory of computation → Machine learning theory
  • 1 Theory of computation → Problems, reductions and completeness

  • Refine by Keyword
  • 1 Clustering
  • 1 Hardness in practice
  • 1 Learning Algorithm
  • 1 MAXCUT
  • 1 Non worst-case analysis

  • Refine by Type
  • 2 document

  • Refine by Publication Year
  • 1 2013
  • 1 2021
