Taming Differentiable Logics with Coq Formalisation

Authors: Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Ślusarz, and Kathrin Stark




File

LIPIcs.ITP.2024.4.pdf
  • Filesize: 0.98 MB
  • 19 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITP.2024.4

Author Details

Reynald Affeldt
  • National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan
Alessandro Bruni
  • IT-University of Copenhagen, Denmark
Ekaterina Komendantskaya
  • Southampton University, UK
  • Heriot-Watt University, Edinburgh, UK
Natalia Ślusarz
  • Heriot-Watt University, Edinburgh, UK
Kathrin Stark
  • Heriot-Watt University, Edinburgh, UK

Cite As

Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Ślusarz, and Kathrin Stark. Taming Differentiable Logics with Coq Formalisation. In 15th International Conference on Interactive Theorem Proving (ITP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 309, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024). https://doi.org/10.4230/LIPIcs.ITP.2024.4

Abstract

For performance and verification in machine learning, new methods have recently been proposed that optimise learning systems to satisfy formally expressed logical properties. Among these methods, differentiable logics (DLs) are used to translate propositional or first-order formulae into loss functions deployed for optimisation in machine learning. At the same time, recent attempts to give programming language support for verification of neural networks have shown that DLs can be used to compile verification properties to machine-learning backends. This situation calls for stronger guarantees about the soundness of such compilers, the soundness and compositionality of DLs, and the differentiability and performance of the resulting loss functions. In this paper, we propose an approach to formalise existing DLs using the Mathematical Components library in the Coq proof assistant. Thanks to this formalisation, we are able to give a uniform semantics to otherwise disparate DLs, turn existing informal arguments into formal proofs, find errors in previous work, and formally prove properties that had previously only been conjectured. This work is meant as a stepping stone for the development of programming language support for verification of machine learning.
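To illustrate what such a translation looks like, the following is a minimal, hedged Coq sketch, not the paper's actual development (which builds on the Mathematical Components library): it interprets a toy propositional fragment with real-valued atomic constraints as loss functions in the style of one existing DL, DL2, where a loss of 0 means the formula is satisfied and larger values measure the degree of violation. The names form and loss are hypothetical and chosen only for this illustration.

  (* Minimal illustrative sketch, assuming only Coq's standard Reals library;
     this is not the formalisation described in the paper. *)
  From Coq Require Import Reals.
  Local Open Scope R_scope.

  (* Syntax of a toy propositional fragment with real-valued atoms. *)
  Inductive form : Type :=
  | Leq : R -> R -> form            (* atomic constraint  t <= t' *)
  | And : form -> form -> form
  | Or  : form -> form -> form.

  (* DL2-style loss: 0 means "satisfied", positive values measure violation. *)
  Fixpoint loss (f : form) : R :=
    match f with
    | Leq t t'  => Rmax (t - t') 0
    | And f1 f2 => loss f1 + loss f2
    | Or  f1 f2 => loss f1 * loss f2
    end.

  (* Compositionality of the translation is definitional for conjunction. *)
  Lemma loss_And f1 f2 : loss (And f1 f2) = loss f1 + loss f2.
  Proof. reflexivity. Qed.

Other DLs differ mainly in the clauses chosen for the connectives; the point of the formalisation is to give such otherwise disparate translations a uniform semantics and to state and prove properties such as soundness and compositionality about them.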

Subject Classification

ACM Subject Classification
  • Theory of computation → Type theory
  • Theory of computation → Logic and verification
  • Computing methodologies → Rule learning
Keywords
  • Machine Learning
  • Loss Functions
  • Differentiable Logics
  • Logic and Semantics
  • Interactive Theorem Proving

