On the Performance of Deep Generative Models of Realistic SAT Instances

Authors: Iván Garzón, Pablo Mesejo, and Jesús Giráldez-Cru



File

LIPIcs.SAT.2022.3.pdf
  • Filesize: 2.73 MB
  • 19 pages

Document Identifiers

  • DOI: 10.4230/LIPIcs.SAT.2022.3

Author Details

Iván Garzón
  • LSI, DaSCI, University of Granada, Spain
Pablo Mesejo
  • DECSAI, DaSCI, University of Granada, Spain
Jesús Giráldez-Cru
  • DECSAI, DaSCI, University of Granada, Spain

Cite As

Iván Garzón, Pablo Mesejo, and Jesús Giráldez-Cru. On the Performance of Deep Generative Models of Realistic SAT Instances. In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 236, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.SAT.2022.3

Abstract

Generating realistic random SAT instances, i.e., random SAT formulas whose computational characteristics resemble those of application SAT benchmarks, is a challenging problem whose solution would help explain the success of modern SAT solvers on this kind of problem. Traditional approaches are based on probabilistic models, in which a probability distribution governs the occurrences of variables in clauses in order to mimic a certain feature exhibited by most application formulas (e.g., community structure), but such models may fail to reproduce other features. Alternatively, deep generative models have recently been proposed to address this problem. Their goal is to learn the whole structure of the formula, without focusing on any predefined feature, in order to reproduce all of its computational characteristics at once. In this work, we propose two new deep generative models of realistic SAT instances and carry out an exhaustive experimental evaluation of these and other existing models in order to analyze their performance. Our results show that models based on graph convolutional networks, possibly enhanced with edge features, yield the best results in terms of both structural properties and SAT solver performance.
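To make the setting concrete, the following minimal sketch shows how a CNF formula can be viewed as a bipartite clause-variable graph and processed by a single graph-convolution step. This is an illustrative toy, not the authors' model: the example formula, the incidence matrix `B`, and the helper `gcn_layer` (implementing the standard normalized propagation rule H' = relu(D^{-1/2}(A+I)D^{-1/2} H W)) are all assumptions made for demonstration, with random untrained weights.

```python
import numpy as np

# Hypothetical toy CNF: (x1 v -x2) & (x2 v x3) & (-x1 v -x3),
# encoded as a clause-variable incidence matrix B, where
# B[c, v] = 1 iff variable v appears (with either sign) in clause c.
B = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
], dtype=float)

n_clauses, n_vars = B.shape

# Symmetric adjacency of the bipartite graph:
# nodes 0..n_clauses-1 are clauses, the remaining nodes are variables.
n = n_clauses + n_vars
A = np.zeros((n, n))
A[:n_clauses, n_clauses:] = B
A[n_clauses:, :n_clauses] = B.T

def gcn_layer(A, H, W):
    """One graph-convolution step: relu(D^{-1/2} (A+I) D^{-1/2} H W)."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
H0 = rng.normal(size=(n, 4))                   # random initial node features
W = rng.normal(size=(4, 4))                    # untrained layer weights
H1 = gcn_layer(A, H0, W)
print(H1.shape)  # (6, 4): one feature row per clause/variable node
```

Stacking such layers lets each node embedding absorb information from increasingly distant clauses and variables, which is the mechanism the GCN-based generators in the paper exploit; edge features (e.g., literal polarity) can additionally be injected into the propagation step.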

Subject Classification

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
Keywords
  • Realistic SAT generators
  • pseudo-industrial random SAT
  • deep generative models
  • deep learning

