Frugal Algorithm Selection (Short Paper)

Authors: Erdem Kuş, Özgür Akgün, Nguyen Dang, Ian Miguel



Author Details

Erdem Kuş
  • School of Computer Science, University of St Andrews, UK
Özgür Akgün
  • School of Computer Science, University of St Andrews, UK
Nguyen Dang
  • School of Computer Science, University of St Andrews, UK
Ian Miguel
  • School of Computer Science, University of St Andrews, UK

Cite As

Erdem Kuş, Özgür Akgün, Nguyen Dang, and Ian Miguel. Frugal Algorithm Selection (Short Paper). In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.CP.2024.38

Abstract

When solving decision and optimisation problems, many competing algorithms (model and solver choices) have complementary strengths. Typically, no single algorithm works well on all instances of a problem. Automated algorithm selection has been shown to work very well for choosing a suitable algorithm for a given instance. However, the cost of training can be prohibitively large, since it requires running the candidate algorithms on a representative set of training instances. In this work, we explore reducing this cost by selecting a subset of the training instances on which to train. We approach this problem in three ways: using active learning to decide, based on prediction uncertainty, which instances to label; augmenting the algorithm predictors with a timeout predictor; and collecting training data under a progressively increasing timeout. We evaluate combinations of these approaches on six datasets from ASLib and report the reduction in labelling cost achieved by each option.
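
To make these three ingredients concrete, below is a minimal sketch in Python. It is not the authors' implementation: synthetic data stands in for an ASLib scenario, every name (simulate_run, FINAL_TIMEOUT, BATCH, ...) is a hypothetical stand-in, and censored (timed-out) runs are imputed at the current cutoff as a crude substitute for the separate timeout predictor described above.

# Minimal sketch (assumptions throughout): synthetic data replaces a real
# ASLib scenario; simulate_run is a hypothetical stand-in for actually
# running an algorithm on an instance under a time limit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N_INST, N_FEAT, N_ALGO = 300, 8, 3          # sizes chosen arbitrarily
FINAL_TIMEOUT = 64.0                        # full per-run cutoff (arbitrary)

X = rng.random((N_INST, N_FEAT))                   # instance features
true_rt = rng.exponential(10.0, (N_INST, N_ALGO))  # hidden true runtimes

def simulate_run(i, a, timeout):
    """Pretend to run algorithm a on instance i with a cutoff."""
    t = true_rt[i, a]
    return t if t <= timeout else None             # None = timed out

SEED, BATCH, ROUNDS = 20, 15, 4
labelled = set(int(i) for i in rng.choice(N_INST, SEED, replace=False))
runtimes = {}                                      # (i, a) -> time or None
timeout = FINAL_TIMEOUT / 2 ** ROUNDS              # start small, then double

for _ in range(ROUNDS + 1):
    # Label newly selected instances; re-run censored pairs at the new,
    # larger cutoff (the progressively increasing timeout schedule).
    for i in labelled:
        for a in range(N_ALGO):
            if runtimes.get((i, a)) is None:
                runtimes[(i, a)] = simulate_run(i, a, timeout)

    # One runtime predictor per algorithm. Censored labels are imputed at
    # the current cutoff -- a crude substitute for a timeout predictor.
    idx = sorted(labelled)
    models = []
    for a in range(N_ALGO):
        y = [runtimes[(i, a)] if runtimes[(i, a)] is not None else timeout
             for i in idx]
        m = RandomForestRegressor(n_estimators=50, random_state=0)
        m.fit(X[idx], y)
        models.append(m)

    # Active learning: uncertainty = spread of per-tree predictions,
    # summed over algorithms; query the most uncertain unlabelled instances.
    def uncertainty(i):
        x = X[i:i + 1]
        return sum(np.std([t.predict(x)[0] for t in m.estimators_])
                   for m in models)

    pool = sorted((i for i in range(N_INST) if i not in labelled),
                  key=uncertainty, reverse=True)
    labelled.update(pool[:BATCH])
    timeout = min(timeout * 2, FINAL_TIMEOUT)

# Algorithm selection for a new instance: lowest predicted runtime wins.
x_new = rng.random((1, N_FEAT))
print("selected:", int(np.argmin([m.predict(x_new)[0] for m in models])))

Per-tree disagreement is only one possible uncertainty measure, and in the augmented variant described in the abstract the runtime predictors would be paired with a dedicated timeout predictor rather than the imputation shortcut used here.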

Subject Classification

ACM Subject Classification
  • Theory of computation → Active learning
  • Theory of computation → Constraint and logic programming
Keywords
  • Algorithm Selection
  • Active Learning
