Revisiting SATZilla Features in 2024

Authors: Hadar Shavit, Holger H. Hoos




File

LIPIcs.SAT.2024.27.pdf
  • Filesize: 1.78 MB
  • 26 pages

Author Details

Hadar Shavit
  • Chair for AI Methodology, RWTH Aachen University, Germany
Holger H. Hoos
  • Chair for AI Methodology, RWTH Aachen University, Germany
  • Leiden Institute of Advanced Computer Science, Leiden University, The Netherlands
  • Department of Computer Science, University of British Columbia, Vancouver, Canada

Acknowledgements

We would like to thank Matthias König and Anja Jankovic for the discussions. We would also like to thank the reviewers for their valuable feedback.

Cite As

Hadar Shavit and Holger H. Hoos. Revisiting SATZilla Features in 2024. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 27:1-27:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.SAT.2024.27

Abstract

Boolean satisfiability (SAT) is an NP-complete problem with important applications, notably in hardware and software verification. Characterising a SAT instance by a set of features has shown great potential for various tasks, ranging from algorithm selection to benchmark generation. In this work, we revisit the widely used SATZilla features and introduce a new version of the tool used to compute them. In particular, we utilise a new preprocessor and SAT solvers, adjust the code to accommodate larger formulas, and determine better settings of the feature extraction time limits. We evaluate the extracted features on three downstream tasks: satisfiability prediction, running time prediction, and algorithm selection. We observe that our new tool is able to extract features from a broader range of instances than before. We show that the new version of the feature extractor produces features that achieve up to 26% lower RMSE for running time prediction, up to 3% higher accuracy for satisfiability prediction, and up to 15 times higher closed gap for algorithm selection on benchmarks from recent SAT competitions.
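The abstract reports results in terms of RMSE (for running time prediction) and closed gap (for algorithm selection). As a rough illustration of how such metrics are typically computed, here is a minimal sketch; the function names and the example numbers are illustrative assumptions, not values from the paper:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error, commonly used to score running time predictions."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def closed_gap(selector_cost, sbs_cost, vbs_cost):
    """Fraction of the gap between the single best solver (SBS) and the
    virtual best solver (VBS) that an algorithm selector closes.
    1.0 means the selector matches the VBS; 0.0 means it performs
    no better than always running the SBS."""
    return (sbs_cost - selector_cost) / (sbs_cost - vbs_cost)

# Hypothetical aggregate costs (e.g. PAR-2 scores, in seconds) on a benchmark set:
print(closed_gap(selector_cost=1200.0, sbs_cost=1500.0, vbs_cost=900.0))  # 0.5
```

Under this definition, "15 times higher closed gap" means the new features let the selector recover a much larger share of the SBS-to-VBS performance gap than the old ones.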

Subject Classification

ACM Subject Classification
  • Theory of computation → Logic and verification
Keywords
  • Satisfiability
  • feature extraction
  • running time prediction
  • satisfiability prediction

