Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)

Author: Marta Z. Kwiatkowska



Author Details

Marta Z. Kwiatkowska
  • Department of Computer Science, University of Oxford, UK

Cite As

Marta Z. Kwiatkowska. Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper). In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 1:1-1:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Abstract

Computing systems are becoming ever more complex, and increasingly often incorporate deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress in developing automated verification techniques for deep neural networks, aimed at ensuring the safety and robustness of their decisions with respect to input perturbations. These include novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods.
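To make the robustness property concrete: a decision of a network is called robust at an input if no perturbation within a given bound changes the predicted class. The sketch below, which is illustrative and not taken from the paper, checks this exhaustively on a discretised grid over an L-infinity ball for a toy two-layer ReLU classifier; all names (`W1`, `b1`, `classify`, `robust_on_grid`) are hypothetical, and grid search only refutes robustness soundly — real verification tools bound the behaviour between grid points.

```python
import itertools
import numpy as np

# Toy 2-layer ReLU classifier: 2 inputs, 2 hidden units, 2 classes.
# Weights are arbitrary illustrative values.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
b2 = np.array([0.1, 0.0])

def classify(x):
    h = np.maximum(0.0, W1 @ x + b1)    # ReLU hidden layer
    return int(np.argmax(W2 @ h + b2))  # predicted class index

def robust_on_grid(x, eps, steps=11):
    """Return True if every grid point in the L-infinity ball of
    radius eps around x receives the same class as x itself."""
    c = classify(x)
    deltas = np.linspace(-eps, eps, steps)
    for d in itertools.product(deltas, repeat=len(x)):
        if classify(x + np.array(d)) != c:
            return False  # found an adversarial grid point
    return True

x0 = np.array([1.0, 0.5])
print(robust_on_grid(x0, eps=0.05), robust_on_grid(x0, eps=0.1))
```

For this toy network the decision at `x0` survives perturbations of radius 0.05 but a perturbation of radius 0.1 flips the class, illustrating the instability the abstract refers to.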

Subject Classification

ACM Subject Classification
  • Theory of computation → Logic and verification
  • Computing methodologies → Neural networks

Keywords and phrases
  • Neural networks
  • robustness
  • formal verification
  • Bayesian neural networks



References

  1. Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331, 2018.
  2. Arno Blaas, Luca Laurenti, Andrea Patane, Luca Cardelli, Marta Kwiatkowska, and Stephen J. Roberts. Robustness quantification for classification with Gaussian processes. CoRR abs/1905.11876, 2019.
  3. Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, and Matthew Wicker. Statistical guarantees for the robustness of Bayesian neural networks. In IJCAI, 2019.
  4. Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, and Andrea Patane. Robustness guarantees for Bayesian inference with Gaussian processes. In AAAI, 2019.
  5. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.
  6. Yarin Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
  7. Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In CAV, pages 3-29. Springer, 2017.
  8. Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV, pages 97-117. Springer, 2017.
  9. David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
  10. David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448-472, 1992.
  11. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv e-prints, June 2017.
  12. Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In ICML 2018, pages 3578-3586, 2018.
  13. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372-387. IEEE, 2016.
  14. Luca Pulina and Armando Tacchella. An abstraction-refinement approach to verification of artificial neural networks. In CAV, pages 243-257. Springer, 2010.
  15. Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. Reachability analysis of deep neural networks with provable guarantees. In IJCAI, pages 2651-2659. AAAI Press, 2018.
  16. Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, and Daniel Kroening. Concolic testing for deep neural networks. In ASE 2018, pages 109-119, 2018.
  17. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.
  18. Vincent Tjeng, Kai Xiao, and Russ Tedrake. Evaluating robustness of neural networks with mixed integer programming. CoRR abs/1711.07356, 2017.
  19. Matthew Wicker, Xiaowei Huang, and Marta Kwiatkowska. Feature-guided black-box safety testing of deep neural networks. In TACAS, pages 408-426. Springer, 2018.
  20. Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. A game-based approximate verification of deep neural networks with provable guarantees. Theoretical Computer Science, 2018. To appear.
  21. Håkan L. S. Younes, Marta Kwiatkowska, Gethin Norman, and David Parker. Numerical vs. statistical probabilistic model checking. International Journal on Software Tools for Technology Transfer, 8(3):216-228, 2006.