On Quantitative Testing of Samplers

Authors: Mate Soos, Priyanka Golia, Sourav Chakraborty, and Kuldeep S. Meel



Author Details

Mate Soos
  • National University of Singapore, Singapore
Priyanka Golia
  • Indian Institute of Technology Kanpur, India
  • National University of Singapore, Singapore
Sourav Chakraborty
  • Indian Statistical Institute Kolkata, India
Kuldeep S. Meel
  • National University of Singapore, Singapore

Acknowledgements

We are grateful to the anonymous reviewers for constructive comments that significantly improved the final version of the paper. We are thankful to Yash Pote for his detailed feedback on the early drafts of the paper. The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore: https://www.nscc.sg.

Cite As

Mate Soos, Priyanka Golia, Sourav Chakraborty, and Kuldeep S. Meel. On Quantitative Testing of Samplers. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 235, pp. 36:1-36:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.CP.2022.36

Abstract

The problem of uniform sampling is: given a formula F, sample solutions of F uniformly at random from its solution space. Uniform sampling is a fundamental problem with widespread applications, including configuration testing, bug synthesis, function synthesis, and many more. State-of-the-art approaches for uniform sampling exhibit a trade-off between scalability and theoretical guarantees. Many state-of-the-art uniform samplers provide no theoretical guarantees on the distribution of the samples they generate, yet they have shown promising results empirically. In such cases, the main challenge is to test whether the distribution according to which samples are generated is indeed uniform. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether the distribution according to which a given sampler generates samples is close to uniform or far from it. They showed that many off-the-shelf samplers are far from a uniform sampler. The availability of Barbarik enabled test-driven development of samplers. More recently, Golia, Soos, Chakraborty, and Meel (2021) designed a uniform-like sampler, CMSGen, which was accepted by Barbarik on all tested instances; however, CMSGen comes with no theoretical analysis of its sampling quality. CMSGen highlights the need for a tester that provides a quantitative answer about the quality of the underlying sampler, rather than merely a qualitative answer of Accept or Reject. Towards this goal, we design a computational-hardness-based tester, ScalBarbarik, that provides a more nuanced analysis of a sampler's quality and allows a more expressive measurement of the quality of the underlying sampler. We empirically show that the state-of-the-art sampler CMSGen is not accepted as a uniform-like sampler by ScalBarbarik. Furthermore, we show that ScalBarbarik can be used to design a sampler that achieves a balance between scalability and uniformity.
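To make the problem statement concrete, the following is a minimal, self-contained sketch of uniform sampling and of measuring a sampler's distance from uniformity. It is not the paper's algorithm: Barbarik and ScalBarbarik are grey-box testers that do not enumerate the solution space, whereas this sketch brute-forces a tiny DIMACS-style CNF formula (the formula, function names, and the total-variation-distance check are all illustrative assumptions).

```python
import itertools
import random
from collections import Counter

# Illustrative CNF formula: a list of clauses, each a list of signed ints
# (DIMACS-style: 3 means x3 is true, -3 means x3 is false).
FORMULA = [[1, 2], [-1, 3], [-2, -3]]
NUM_VARS = 3

def satisfies(assignment, formula):
    # assignment maps variable index -> bool; a clause holds if any literal holds.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def all_solutions(formula, num_vars):
    # Enumerate every satisfying assignment (exponential: only viable for tiny formulas).
    sols = []
    for bits in itertools.product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if satisfies(assignment, formula):
            sols.append(tuple(bits))
    return sols

def ideal_uniform_sampler(formula, num_vars):
    # The gold standard a tester compares against: pick a solution uniformly.
    return random.choice(all_solutions(formula, num_vars))

def empirical_tvd(sampler, sols, n=20000):
    # Total variation distance between the sampler's empirical distribution
    # and the uniform distribution over the known solution set.
    counts = Counter(sampler() for _ in range(n))
    return 0.5 * sum(abs(counts[s] / n - 1 / len(sols)) for s in sols)
```

A heavily biased sampler (one that returns a particular solution 90% of the time) yields a large empirical total variation distance, while the ideal sampler's distance shrinks toward zero as the sample count grows; the point of the paper's testers is to detect such deviations without enumerating the (typically astronomically large) solution space.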

Subject Classification

ACM Subject Classification
  • Theory of computation
  • Computing methodologies → Artificial intelligence
Keywords
  • SAT Sampling
  • Testing of Samplers
  • SAT Solvers


References

  1. Dimitris Achlioptas, Zayd S. Hammoudeh, and Panos Theodoropoulos. Fast sampling of perfectly uniform satisfying assignments. In Proc. of SAT, 2018.
  2. Armin Biere and Andreas Fröhlich. Evaluating CDCL restart schemes. In Proc. of Pragmatics of SAT 2015, 2018.
  3. Sourav Chakraborty and Kuldeep S. Meel. On testing of uniform samplers. In Proc. of AAAI, 2019.
  4. Supratik Chakraborty, Dror Fried, Kuldeep S. Meel, and Moshe Y. Vardi. From weighted to unweighted model counting. In Proc. of AAAI, 2015.
  5. Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi. A scalable and nearly uniform generator of SAT witnesses. In Proc. of CAV, 2013.
  6. Supratik Chakraborty, Kuldeep S. Meel, and Moshe Y. Vardi. Balancing scalability and uniformity in SAT witness generator. In Proc. of DAC, 2014.
  7. Lori A. Clarke. A program testing system. In Proc. of ACM, 1976.
  8. Rafael Dutra, Kevin Laeufer, Jonathan Bachrach, and Koushik Sen. Efficient sampling of SAT solutions for testing. In Proc. of ICSE, 2018.
  9. Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Uniform solution sampling using a constraint solver as an oracle. In Proc. of UAI, 2012.
  10. Vijay Ganesh, Charles W. O'Donnell, Mate Soos, Srinivas Devadas, Martin C. Rinard, and Armando Solar-Lezama. Lynx: A programmatic SAT solver for the RNA-folding problem. In Proc. of SAT, 2012.
  11. Oded Goldreich. Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, 2009.
  12. Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. Manthan: A data-driven approach for Boolean function synthesis. In Proc. of CAV, 2020.
  13. Priyanka Golia, Mate Soos, Sourav Chakraborty, and Kuldeep S. Meel. Designing samplers is easy: The boon of testers. In Proc. of FMCAD, 2021.
  14. Evgeny A. Grechnikov. Collisions for 72-step and 73-step SHA-1: improvements in the method of characteristics. IACR Cryptology ePrint Archive, 2010.
  15. James C. King. Symbolic execution and program testing. Comm. ACM, 1976.
  16. James H. Kukula and Thomas R. Shiple. Building circuits from relations. In Proc. of CAV, 2000.
  17. S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Statist., 1951.
  18. Kuldeep S. Meel, Yash Pote, and Sourav Chakraborty. On testing of samplers. In Proc. of NeurIPS, 2020.
  19. Yehuda Naveh, Michal Rimon, Itai Jaeger, Yoav Katz, Michael Vinov, Eitan Marcus, and Gil Shurek. Constraint-based random stimuli generation for hardware verification. AI Magazine, 2007.
  20. Vegard Nossum. SAT-based preimage attacks on SHA-1. Master's thesis, University of Oslo, 2012. URL: https://www.duo.uio.no/handle/10852/34912.
  21. Subhajit Roy, Awanish Pandey, Brendan Dolan-Gavitt, and Yu Hu. Bug synthesis: Challenging bug-finding tools with deep faults. In Proc. of ESEC/FSE, 2018.
  22. Shubham Sharma, Rahul Gupta, Subhajit Roy, and Kuldeep S. Meel. Knowledge compilation meets uniform sampling. In Proc. of LPAR, 2018.
  23. Mate Soos, Stephan Gocht, and Kuldeep S. Meel. Tinted, detached, and lazy CNF-XOR solving and its applications to counting and sampling. In Proc. of CAV, 2020.
  24. Mate Soos, Karsten Nohl, and Claude Castelluccia. Extending SAT solvers to cryptographic problems. In Proc. of SAT, 2009.