DNN Verification, Reachability, and the Exponential Function Problem

Authors: Omri Isac, Yoni Zohar, Clark Barrett, Guy Katz



File

LIPIcs.CONCUR.2023.26.pdf
  • Filesize: 0.65 MB
  • 18 pages

Author Details

Omri Isac
  • The Hebrew University of Jerusalem, Israel
Yoni Zohar
  • Bar-Ilan University, Ramat Gan, Israel
Clark Barrett
  • Stanford University, CA, USA
Guy Katz
  • The Hebrew University of Jerusalem, Israel

Cite As

Omri Isac, Yoni Zohar, Clark Barrett, and Guy Katz. DNN Verification, Reachability, and the Exponential Function Problem. In 34th International Conference on Concurrency Theory (CONCUR 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 279, pp. 26:1-26:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.CONCUR.2023.26

Abstract

Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification community has begun developing techniques for rigorously analyzing DNNs, with numerous verification algorithms proposed in recent years. While a significant amount of work has gone into developing these verification algorithms, little work has been devoted to rigorously studying the computability and complexity of the underlying theoretical problems. Here, we seek to help bridge this gap. We focus on two kinds of DNNs: those that employ piecewise-linear activation functions (e.g., ReLU), and those that employ piecewise-smooth activation functions (e.g., Sigmoids). We prove the following two theorems: (i) the decidability of verifying DNNs with a particular set of piecewise-smooth activation functions, including Sigmoid and tanh, is equivalent to a well-known, open problem formulated by Tarski; and (ii) the DNN verification problem for any quantifier-free linear arithmetic specification can be reduced to the DNN reachability problem, whose approximation is NP-complete. These results answer two fundamental questions about the computability and complexity of DNN verification, and the ways it is affected by the network’s activation functions and error tolerance; and they could help guide future efforts in developing DNN verification tools.
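
For readers skimming this page, the following minimal LaTeX sketch recalls the standard definitions of the activation functions named in the abstract and one common way of phrasing the verification and (approximate) reachability queries it refers to. The symbols N, P, Q, y, and epsilon are illustrative and are not taken from the paper; the paper's own formalization (see the full text) may differ in its details.

% A minimal, self-contained sketch; N, P, Q, y and epsilon are
% illustrative symbols, not notation taken from the paper itself.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% The activation functions mentioned in the abstract (standard definitions):
% ReLU is piecewise-linear; the sigmoid and hyperbolic tangent are
% piecewise-smooth and involve the exponential function.
\[
  \operatorname{ReLU}(x) = \max(0, x), \qquad
  \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
  \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.
\]

% One common satisfiability-style phrasing of DNN verification: given a
% network $N \colon \mathbb{R}^{n} \to \mathbb{R}^{m}$ and quantifier-free
% linear arithmetic constraints $P$ (over inputs) and $Q$ (over outputs),
% decide whether
\[
  \exists x \in \mathbb{R}^{n}.\ P(x) \wedge Q(N(x)).
\]

% A tolerance-based (approximate) reachability query: given a target
% $y \in \mathbb{R}^{m}$ and a tolerance $\varepsilon > 0$, decide whether
% some admissible input $x$ satisfies
\[
  \lVert N(x) - y \rVert \le \varepsilon .
\]

\end{document}

The exponential function appearing in the sigmoid and tanh definitions is what connects the piecewise-smooth case to Tarski's exponential function problem mentioned in the abstract and in the paper's title.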

Subject Classification

ACM Subject Classification
  • Theory of computation
Keywords
  • Formal Verification
  • Computability Theory
  • Deep Neural Networks

References

  1. M. Akintunde, E. Botoeva, P. Kouvaros, and A. Lomuscio. Formal Verification of Neural Agents in Non-Deterministic Environments. Autonomous Agents and Multi-Agent Systems, 36:1-36, 2022.
  2. M. Akintunde, A. Kevorchian, A. Lomuscio, and E. Pirovano. Verification of RNN-Based Neural Agent-Environment Systems. In Proc. 33rd AAAI Conf. on Artificial Intelligence (AAAI), pages 197-210, 2019.
  3. M. Akintunde, A. Lomuscio, L. Maganti, and E. Pirovano. Reachability Analysis for Neural Agent-Environment Systems. In Proc. 16th Int. Conf. on Principles of Knowledge Representation and Reasoning, 2018.
  4. G. Avni, R. Bloem, K. Chatterjee, T. Henzinger, B. Konighofer, and S. Pranger. Run-Time Optimization for Learned Controllers through Quantitative Games. In Proc. 31st Int. Conf. on Computer Aided Verification (CAV), pages 630-649, 2019.
  5. Caglar Aytekin. Neural Networks are Decision Trees, 2022. Technical Report. URL: https://arxiv.org/abs/2210.05189.
  6. T. Baluta, S. Shen, S. Shinde, K. Meel, and P. Saxena. Quantitative Verification of Neural Networks And its Security Applications. In Proc. ACM SIGSAC Conf. on Computer and Communications Security (CCS), pages 1249-1264, 2019.
  7. Gilles Barthe, Rohit Chadha, Paul Krogmeier, A Prasad Sistla, and Mahesh Viswanathan. Deciding Accuracy of Differential Privacy Schemes. In Proc. ACM on Programming Languages (POPL), pages 1-30, 2021.
  8. M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. End to End Learning for Self-Driving Cars, 2016. Technical Report. URL: https://arxiv.org/abs/1604.07316.
  9. Jérémie Cabessa. Turing Complete Neural Computation Based on Synaptic Plasticity. PLoS ONE, 14(10), 2019.
  10. OpenAI. ChatGPT: Optimizing Language Models for Dialogue, 2022. URL: https://openai.com/blog/chatgpt.
  11. Mingzhe Chen, Ursula Challita, Walid Saad, Changchuan Yin, and Mérouane Debbah. Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial. IEEE Communications Surveys & Tutorials, 21(4):3039-3071, 2019.
  12. S. Dubey, S. Singh, and B. Chaudhuri. Activation Functions in Deep Learning: A Comprehensive Survey and Benchmark. Neurocomputing, 2022.
  13. Yizhak Yisrael Elboher, Justin Gottschlich, and Guy Katz. An Abstraction-Based Framework for Neural Network Verification. In Proc. 32nd Intl. Conf. Computer Aided Verification (CAV), pages 43-65, 2020.
  14. A. Esteva, A. Robicquet, B. Ramsundar, V. Kuleshov, M. DePristo, K. Chou, C. Cui, G. Corrado, S. Thrun, and J. Dean. A Guide to Deep Learning in Healthcare. Nature Medicine, 25(1):24-29, 2019.
  15. James Ferlez and Yasser Shoukry. Bounding the Complexity of Formally Verifying Neural Networks: a Geometric Approach. In Proc. 60th IEEE Conf. on Decision and Control (CDC), pages 5104-5109, 2021.
  16. Peter Franek, Stefan Ratschan, and Piotr Zgliczynski. Satisfiability of Systems of Equations of Real Analytic Functions is Quasi-Decidable. In Proc. 36th Int. Symposium on Mathematical Foundations of Computer Science (MFCS), pages 315-326, 2011.
  17. D. Fremont, J. Chiu, D. Margineantu, D. Osipychev, and S. Seshia. Formal Analysis and Redesign of a Neural Network-Based Aircraft Taxiing System with VerifAI. In Proc. 32nd Int. Conf. on Computer Aided Verification (CAV), pages 122-134, 2020.
  18. T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In Proc. 39th IEEE Symposium on Security and Privacy (S&P), 2018.
  19. S. Ghilardi. Model-Theoretic Methods in Combined Constraint Satisfiability. Journal of Automated Reasoning, 33:221-249, 2004.
  20. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
  21. I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and Harnessing Adversarial Examples, 2014. Technical Report. URL: https://arxiv.org/abs/1412.6572.
  22. Dario Guidotti, Luca Pulina, and Armando Tacchella. pyNeVer: A Framework for Learning and Verification of Neural Networks. In Proc. 19th Int. Symposium Automated Technology for Verification and Analysis (ATVA), pages 357-363, 2021.
  23. P. Henriksen and A. Lomuscio. Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search. In Proc. 24th European Conf. on Artificial Intelligence (ECAI), pages 2513-2520, 2020.
  24. T. Henzinger, M. Lechner, and Đ. Žikelić. Scalable Verification of Quantized Neural Networks. In Proc. AAAI Conf. on Artificial Intelligence, pages 3787-3795, 2021.
  25. X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of Deep Neural Networks. In Proc. 29th Int. Conf. on Computer Aided Verification (CAV), pages 3-29, 2017.
  26. Omri Isac, Yoni Zohar, Clark Barrett, and Guy Katz. DNN Verification, Reachability, and the Exponential Function Problem, 2023. Technical Report. URL: https://arxiv.org/abs/2305.06064.
  27. R. Ivanov, J. Weimer, R. Alur, G. Pappas, and I. Lee. Verisig: Verifying Safety Properties of Hybrid Systems with Neural Network Controllers. In Proc. 22nd ACM Int. Conf. on Hybrid Systems: Computation and Control (HSCC), pages 169-178, 2019.
  28. G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Reluplex: a Calculus for Reasoning about Deep Neural Networks. Formal Methods in System Design (FMSD), 2021.
  29. X. Liu, L. Xie, Y. Wang, J. Zou, J. Xiong, Z. Ying, and A. Vasilakos. Privacy and Security Issues in Deep Learning: A Survey. IEEE Access, 9:4566-4593, 2020.
  30. Yang Liu, Jianpeng Zhang, Chao Gao, Jinghua Qu, and Lixin Ji. Natural-Logarithm-Rectified Activation Function in Convolutional Neural Networks. In Proc. 5th Int. Conf. on Computer and Communications (ICCC), pages 2000-2008, 2019.
  31. Alessio Lomuscio and Lalit Maganti. An Approach to Reachability Analysis for Feed-Forward ReLU Neural Networks, 2017. Technical Report. URL: https://arxiv.org/abs/1706.07351.
  32. A. Lukina, C. Schilling, and T. Henzinger. Into the Unknown: Active Monitoring of Neural Networks. In Proc. 21st Int. Conf. Runtime Verification (RV), pages 42-61, 2021.
  33. Z. Lyu, C.-Y. Ko, Z. Kong, N. Wong, D. Lin, and L. Daniel. Fastened Crown: Tightened Neural Network Robustness Certificates. In Proc. 34th AAAI Conf. on Artificial Intelligence (AAAI), pages 5037-5044, 2020.
  34. A. Macintyre and A. Wilkie. On the Decidability of the Real Exponential Field. In Kreiseliana: About and Around Georg Kreisel, pages 441-467, 1996.
  35. M. Müller, C. Brix, S. Bak, C. Liu, and T. Johnson. The Third International Verification of Neural Networks Competition (VNN-COMP 2022): Summary and Results, 2022. Technical Report. URL: https://arxiv.org/abs/2212.10376.
  36. M. Müller, G. Makarchuk, G. Singh, M. Püschel, and M. Vechev. PRIMA: General and Precise Neural Network Certification via Scalable Convex Hull Approximations. In Proc. 49th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL), pages 1-33, 2022.
  37. N. Narodytska, S. Kasiviswanathan, L. Ryzhyk, M. Sagiv, and T. Walsh. Verifying Properties of Binarized Deep Neural Networks, 2017. Technical Report. URL: https://arxiv.org/abs/1709.06662.
  38. G. Nelson and D. Oppen. Simplification by Cooperating Decision Procedures. ACM Transactions on Programming Languages and Systems (TOPLAS), 1(2):245-257, 1979.
  39. Enrica Nicolini, Christophe Ringeissen, and Michaël Rusinowitch. Data Structures with Arithmetic Constraints: A Non-Disjoint Combination. In Proc. 7th Int. Symposium of Frontiers of Combining Systems (FroCoS), pages 319-334, 2009.
  40. M. Ostrovsky, C. Barrett, and G. Katz. An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks. In Proc. 20th Int. Symposium on Automated Technology for Verification and Analysis (ATVA), pages 391-396, 2022.
  41. L. Pulina and A. Tacchella. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. In Proc. 22nd Int. Conf. on Computer Aided Verification (CAV), pages 243-257, 2010.
  42. D. Richardson. Some Undecidable Problems Involving Elementary Functions of a Real Variable. The Journal of Symbolic Logic, 33(4):514-520, 1968.
  43. Christophe Ringeissen. Cooperation of Decision Procedures for the Satisfiability Problem. In Proc. 1st Int. Workshop of Frontiers of Combining Systems (FroCoS), pages 121-139, 1996.
  44. Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. Reachability Analysis of Deep Neural Networks with Provable Guarantees, 2018. Technical Report. URL: https://arxiv.org/abs/1805.02242.
  45. Marco Sälzer and Martin Lange. Reachability Is NP-Complete Even for the Simplest Neural Networks. In Proc. 15th Int. Conf. on Reachability Problems (RP), pages 149-164, 2021.
  46. Marco Sälzer and Martin Lange. Fundamental Limits in Formal Verification of Message-Passing Neural Networks. In Proc. 11th Int. Conf. on Learning Representations (ICLR), 2023.
  47. S. Sankaranarayanan, S. Dutta, and S. Mover. Reaching Out towards Fully Verified Autonomous Systems. In Proc. 13th Int. Conf. on Reachability Problems (RP), pages 22-32, 2019.
  48. J. Shoenfield. Mathematical Logic. Addison-Wesley, 1967.
  49. G. Singh, T. Gehr, M. Püschel, and M. Vechev. An Abstract Domain for Certifying Neural Networks. In Proc. 46th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL), pages 1-30, 2019.
  50. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing Properties of Neural Networks, 2013. Technical Report. URL: https://arxiv.org/abs/1312.6199.
  51. Alfred Tarski. Project Rand: A Decision Method for Elementary Algebra and Geometry. Rand Corporation, 1948.
  52. Alfred Tarski. A Decision Method for Elementary Algebra and Geometry. Journal of Symbolic Logic, 17(3):24-84, 1952.
  53. Cesare Tinelli and Christophe Ringeissen. Unions of Non-Disjoint Theories and Combinations of Satisfiability Procedures. Theoretical Computer Science, 290(1):291-353, 2003.
  54. H.-D. Tran, S. Bak, W. Xiang, and T. Johnson. Verification of Deep Convolutional Neural Networks Using ImageStars. In Proc. 32nd Int. Conf. on Computer Aided Verification (CAV), pages 18-42, 2020.
  55. A. Turing. Computing Machinery and Intelligence. Mind, LIX(236), 1950.
  56. Mattia Villani and Nandi Schoots. Any Deep ReLU Network is Shallow, 2023. Technical Report. URL: https://arxiv.org/abs/2306.11827.
  57. S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana. Formal Security Analysis of Neural Networks using Symbolic Intervals. In Proc. 27th USENIX Security Symposium, 2018.
  58. Alex Wilkie. Model Completeness Results for Expansions of the Ordered Field of Real Numbers by Restricted Pfaffian Functions and the Exponential Function. Journal of the American Mathematical Society, 9(4):1051-1094, 1996.
  59. H. Wu, A. Zeljić, G. Katz, and C. Barrett. Efficient Neural Network Analysis with Sum-of-Infeasibilities. In Proc. 28th Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 143-163, 2022.
  60. Adrian Wurm. Complexity of Reachability Problems in Neural Networks, 2023. Technical Report. URL: https://arxiv.org/abs/2306.05818.
  61. H. Zhang, M. Shinn, A. Gupta, A. Gurfinkel, N. Le, and N. Narodytska. Verification of Recurrent Neural Networks for Cognitive Tasks via Reachability Analysis. In Proc. 24th European Conf. on Artificial Intelligence (ECAI), pages 1690-1697, 2020.
  62. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph Neural Networks: A Review of Methods and Applications. AI Open, 1:57-81, 2020.