2 Search Results for "Anastacio, Marie"


Frugal Algorithm Selection (Short Paper)

Authors: Erdem Kuş, Özgür Akgün, Nguyen Dang, and Ian Miguel

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
When solving decision and optimisation problems, many competing algorithms (model and solver choices) have complementary strengths. Typically, there is no single algorithm that works well for all instances of a problem. Automated algorithm selection has been shown to work very well for choosing a suitable algorithm for a given instance. However, the cost of training can be prohibitive, since it requires running the candidate algorithms on a representative set of training instances. In this work, we explore reducing this cost by choosing a subset of the training instances on which to train. We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout. We evaluate combinations of these approaches on six datasets from ASlib and present the reduction in labelling cost achieved by each option.
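
As a rough illustration of the first of these ideas, the sketch below shows uncertainty-driven instance selection on synthetic data: per-algorithm runtime predictors are trained on a small labelled set, and the instance whose predicted runtimes are most uncertain is labelled next. This is a hypothetical Python setup, not the authors' pipeline; the data, features, and acquisition rule are invented, and the paper's timeout predictor and progressive-timeout schemes are omitted.

# Minimal sketch of uncertainty-driven frugal algorithm selection.
# Hypothetical synthetic data; the paper's ASlib scenarios, features,
# and timeout handling are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_instances, n_algorithms, n_features = 200, 3, 5

X = rng.normal(size=(n_instances, n_features))      # instance features
W = rng.normal(size=(n_features, n_algorithms))
runtimes = np.exp(X @ W)                            # synthetic runtime matrix

labelled = set(rng.choice(n_instances, size=10, replace=False).tolist())

def tree_std(model, Xq):
    # Uncertainty estimate: spread of the per-tree predictions.
    per_tree = np.stack([t.predict(Xq) for t in model.estimators_])
    return per_tree.std(axis=0)

for step in range(50):                              # labelling budget
    idx = sorted(labelled)
    models = []
    for a in range(n_algorithms):
        m = RandomForestRegressor(n_estimators=50, random_state=0)
        m.fit(X[idx], runtimes[idx, a])
        models.append(m)
    pool = np.array([i for i in range(n_instances) if i not in labelled])
    # Query the instance with the most uncertain predicted runtimes,
    # summed over algorithms (one simple acquisition rule of many).
    uncertainty = sum(tree_std(m, X[pool]) for m in models)
    labelled.add(int(pool[np.argmax(uncertainty)]))  # "run" the solvers there

# Selector from the last training round: lowest predicted runtime wins.
preds = np.column_stack([m.predict(X) for m in models])
print(f"agreement with oracle: {(preds.argmin(axis=1) == runtimes.argmin(axis=1)).mean():.2%}")

Instances on which the forest's trees disagree most are labelled first, the usual query-by-committee intuition behind uncertainty-based active learning; the point of the paper is that such choices can substantially cut the labelling (solver-running) cost.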

Cite as

Erdem Kuş, Özgür Akgün, Nguyen Dang, and Ian Miguel. Frugal Algorithm Selection (Short Paper). In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kus_et_al:LIPIcs.CP.2024.38,
  author =	{Ku\c{s}, Erdem and Akg\"{u}n, \"{O}zg\"{u}r and Dang, Nguyen and Miguel, Ian},
  title =	{{Frugal Algorithm Selection (Short Paper)}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.38},
  URN =		{urn:nbn:de:0030-drops-207239},
  doi =		{10.4230/LIPIcs.CP.2024.38},
  annote =	{Keywords: Algorithm Selection, Active Learning}
}
Statistical Comparison of Algorithm Performance Through Instance Selection

Authors: Théo Matricon, Marie Anastacio, Nathanaël Fijalkow, Laurent Simon, and Holger H. Hoos

Published in: LIPIcs, Volume 210, 27th International Conference on Principles and Practice of Constraint Programming (CP 2021)


Abstract
Empirical performance evaluations, in competitions and scientific publications, play a major role in improving the state of the art in solving many automated reasoning problems, including SAT, CSP and Bayesian network structure learning (BNSL). Empirically demonstrating the merit of a new solver usually requires extensive experiments, with computational costs of CPU years. This not only makes it difficult for researchers with limited access to computational resources to test their ideas and publish their work, but also consumes large amounts of energy. We propose an approach for comparing the performance of two algorithms: by performing runs on carefully chosen instances, we obtain a probabilistic statement on which algorithm performs best, trading off the computational cost of running the algorithms against the confidence in the result. We describe a set of methods for this purpose and evaluate their efficacy on diverse datasets from SAT, CSP and BNSL. On all these datasets, most of our approaches were able to choose the correct algorithm with about 95% accuracy, while using less than a third of the CPU time required for a full comparison; the best methods reach this level of accuracy within less than 15% of the CPU time for a full comparison.
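
As a rough illustration of the early-stopping idea, the sketch below compares two solvers by running them on one instance at a time and stopping as soon as a paired statistical test is confident about the winner. This is a hypothetical Python setup with synthetic runtimes and a random instance order; the paper's careful instance-selection strategies and probabilistic model are not reproduced, and the Wilcoxon test stands in as one simple stopping criterion.

# Minimal sketch of an early-stopped pairwise solver comparison.
# Hypothetical synthetic runtimes; instance order is random here,
# whereas the paper chooses instances carefully.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_instances = 500
run_a = rng.lognormal(mean=1.0, sigma=0.8, size=n_instances)            # solver A
run_b = run_a * rng.lognormal(mean=0.15, sigma=0.3, size=n_instances)   # B, slower on average

order = rng.permutation(n_instances)
seen_a, seen_b = [], []
alpha = 0.05                               # confidence threshold

for i in order:
    seen_a.append(run_a[i])                # "run" both solvers on instance i
    seen_b.append(run_b[i])
    if len(seen_a) < 10:                   # gather a few samples first
        continue
    stat, p = wilcoxon(seen_a, seen_b)     # paired test on the runtimes
    if p < alpha:
        break

winner = "A" if np.mean(seen_a) < np.mean(seen_b) else "B"
print(f"decided for {winner} after {len(seen_a)} of {n_instances} runs, p = {p:.3g}")

Note that stopping at the first significant p-value in a sequential loop inflates the error rate beyond the nominal alpha, which is exactly the kind of issue the paper's principled trade-off between run cost and confidence is designed to handle.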

Cite as

Théo Matricon, Marie Anastacio, Nathanaël Fijalkow, Laurent Simon, and Holger H. Hoos. Statistical Comparison of Algorithm Performance Through Instance Selection. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 210, pp. 43:1-43:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{matricon_et_al:LIPIcs.CP.2021.43,
  author =	{Matricon, Th\'{e}o and Anastacio, Marie and Fijalkow, Nathana\"{e}l and Simon, Laurent and Hoos, Holger H.},
  title =	{{Statistical Comparison of Algorithm Performance Through Instance Selection}},
  booktitle =	{27th International Conference on Principles and Practice of Constraint Programming (CP 2021)},
  pages =	{43:1--43:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-211-2},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{210},
  editor =	{Michel, Laurent D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2021.43},
  URN =		{urn:nbn:de:0030-drops-153346},
  doi =		{10.4230/LIPIcs.CP.2021.43},
  annote =	{Keywords: Performance assessment, early stopping, automated reasoning solvers}
}