3 Search Results for "Prasad, Siddharth"


Document
Improved Sample Complexity Bounds for Branch-And-Cut

Authors: Maria-Florina Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik

Published in: LIPIcs, Volume 235, 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)


Abstract
The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance but are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that, for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees, which are sharper and more general than those from prior research, apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection.
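
The guarantee sketched in this abstract has the standard uniform-convergence form from statistical learning theory. As a hedged illustration only (the notation below, including the performance measure f_\rho, the range bound H, and the pseudo-dimension d, is generic learning-theoretic notation and not the paper's stated bound), such guarantees typically read:

\Pr\left[\ \left|\frac{1}{N}\sum_{i=1}^{N} f_\rho(x_i) \;-\; \mathbb{E}_{x}\!\left[f_\rho(x)\right]\right| \le \epsilon \ \text{ for every configuration } \rho\ \right] \;\ge\; 1-\delta
\qquad\text{whenever}\qquad
N \;=\; O\!\left(\frac{H^2}{\epsilon^2}\left(d\,\ln\frac{H}{\epsilon} + \ln\frac{1}{\delta}\right)\right),

where x_1, ..., x_N are training integer programs drawn i.i.d. from the application domain, f_\rho(x) \in [0, H] measures branch-and-cut performance (for example, search-tree size) on instance x under configuration \rho, and d is the pseudo-dimension of the function class \{f_\rho\}. Bounds of this type are usually obtained by controlling d for the function classes induced by the tunable parameters; the precise quantities bounded in the paper may differ.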

Cite as

Maria-Florina Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik. Improved Sample Complexity Bounds for Branch-And-Cut. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 235, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{balcan_et_al:LIPIcs.CP.2022.3,
  author =	{Balcan, Maria-Florina and Prasad, Siddharth and Sandholm, Tuomas and Vitercik, Ellen},
  title =	{{Improved Sample Complexity Bounds for Branch-And-Cut}},
  booktitle =	{28th International Conference on Principles and Practice of Constraint Programming (CP 2022)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-240-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{235},
  editor =	{Solnon, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2022.3},
  URN =		{urn:nbn:de:0030-drops-166321},
  doi =		{10.4230/LIPIcs.CP.2022.3},
  annote =	{Keywords: Automated algorithm configuration, integer programming, machine learning theory, tree search, branch-and-bound, branch-and-cut, cutting planes, sample complexity, generalization guarantees, data-driven algorithm design}
}
Document
Incentive Compatible Active Learning

Authors: Federico Echenique and Siddharth Prasad

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We consider active learning under incentive compatibility constraints. The main application of our results is to economic experiments, in which a learner seeks to infer the parameters of a subject’s preferences: for example, their attitudes towards risk or their beliefs over uncertain events. By cleverly adapting the experimental design, one can save on the time subjects spend in the laboratory, or maximize the information obtained from each subject in a given laboratory session; but the resulting adaptive design raises complications due to incentive compatibility. A subject in the lab may answer questions strategically, rather than truthfully, so as to steer subsequent questions in a profitable direction. We analyze two standard economic problems: inference of preferences over risk from multiple price lists, and belief elicitation in experiments on choice over uncertainty. In the first setting, we tune a simple and fast learning algorithm to retain certain incentive compatibility properties. In the second setting, we provide an incentive compatible learning algorithm, based on scoring rules, whose query complexity differs by only subpolynomial factors from that of the obvious methods for achieving fast learning rates. Thus, for these areas of application, incentive compatibility may be achieved without paying a large sample complexity price.
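
To make the idea of an adaptive design concrete, here is a minimal sketch (in Python, not from the paper) of actively locating a subject's switching row on a multiple price list via binary search; the ask interface and the demo subject are hypothetical stand-ins for presenting a single row to a subject and recording the choice. Note that a subject who answers early rows strategically could steer the remaining questions, which is precisely the incentive-compatibility issue the paper resolves and which this sketch, on its own, does not address.

from typing import Callable

def locate_switch_point(ask: Callable[[int], bool], n_rows: int) -> int:
    """Return the first row at which the subject chooses the sure amount,
    using O(log n_rows) queries instead of presenting all n_rows rows.
    Assumes choices are monotone in the row index (the subject switches
    from the lottery to the sure amount exactly once)."""
    lo, hi = 0, n_rows  # invariant: the switching row lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if ask(mid):          # sure amount chosen -> switch is at mid or earlier
            hi = mid
        else:                 # lottery chosen -> switch is strictly after mid
            lo = mid + 1
    return lo                 # equals n_rows if the lottery is always chosen

if __name__ == "__main__":
    # Simulated truthful subject whose switching row is 13 on a 20-row list.
    true_switch, queries = 13, []
    def ask(row: int) -> bool:
        queries.append(row)
        return row >= true_switch
    print(locate_switch_point(ask, n_rows=20), f"({len(queries)} queries)")

The recovered switching row pins the subject's risk attitude down to an interval, which is the kind of inference from multiple price lists the first setting concerns.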

Cite as

Federico Echenique and Siddharth Prasad. Incentive Compatible Active Learning. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 67:1-67:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{echenique_et_al:LIPIcs.ITCS.2020.67,
  author =	{Echenique, Federico and Prasad, Siddharth},
  title =	{{Incentive Compatible Active Learning}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{67:1--67:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.67},
  URN =		{urn:nbn:de:0030-drops-117525},
  doi =		{10.4230/LIPIcs.ITCS.2020.67},
  annote =	{Keywords: Active Learning, Incentive Compatibility, Preference Elicitation}
}
Document
Learning Time Dependent Choice

Authors: Zachary Chase and Siddharth Prasad

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We explore questions dealing with the learnability of models of choice over time. We present a large class of preference models, defined by a structural criterion, for which we obtain an exponential improvement over previously known learning bounds for more general preference models. In particular, this implies that the three most important discounted utility models of intertemporal choice (exponential, hyperbolic, and quasi-hyperbolic discounting) are learnable in the PAC setting with VC dimension that grows logarithmically in the number of time periods. We also examine these models in the framework of active learning. We find that the commonly studied stream-based setting is in general difficult to analyze for preference models, but we provide a redeeming situation in which the learner can indeed improve upon the guarantees provided by PAC learning. In contrast to the stream-based setting, we show that if the learner is given full power over the data he learns from, in the form of learning via membership queries, even very naive algorithms significantly outperform the guarantees provided by higher-level active learning algorithms.
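
For reference, the three discounting models named above have the following standard textbook forms (common notation, not necessarily the paper's):

\text{exponential:}\qquad D(t) = \delta^{t}, \quad 0 < \delta < 1,
\text{hyperbolic:}\qquad D(t) = \frac{1}{1 + k t}, \quad k > 0,
\text{quasi-hyperbolic } (\beta\text{-}\delta)\text{:}\qquad D(0) = 1,\ \ D(t) = \beta\,\delta^{t} \ \text{ for } t \ge 1, \quad 0 < \beta \le 1,\ 0 < \delta < 1,

where a stream of outcomes (x_0, ..., x_T) is evaluated as U = \sum_{t=0}^{T} D(t)\, u(x_t). In this notation, learning a model of choice over time amounts to recovering the discount parameters (and possibly the utility u) from finitely many observed comparisons between streams.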

Cite as

Zachary Chase and Siddharth Prasad. Learning Time Dependent Choice. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 62:1-62:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chase_et_al:LIPIcs.ITCS.2019.62,
  author =	{Chase, Zachary and Prasad, Siddharth},
  title =	{{Learning Time Dependent Choice}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{62:1--62:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.62},
  URN =		{urn:nbn:de:0030-drops-101550},
  doi =		{10.4230/LIPIcs.ITCS.2019.62},
  annote =	{Keywords: Intertemporal Choice, Discounted Utility, Preference Recovery, PAC Learning, Active Learning}
}
  • Refine by Author
  • 3 Prasad, Siddharth
  • 1 Balcan, Maria-Florina
  • 1 Chase, Zachary
  • 1 Echenique, Federico
  • 1 Sandholm, Tuomas

  • Refine by Classification
  • 2 Theory of computation → Models of learning
  • 1 Theory of computation → Integer programming
  • 1 Theory of computation → Sample complexity and generalization bounds

  • Refine by Keyword
  • 2 Active Learning
  • 1 Automated algorithm configuration
  • 1 Discounted Utility
  • 1 Incentive Compatibility
  • 1 Intertemporal Choice

  • Refine by Type
  • 3 document

  • Refine by Publication Year
  • 1 2019
  • 1 2020
  • 1 2022
