
Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)

Author: Marta Z. Kwiatkowska

  • Filesize: 1.36 MB
  • 5 pages


Author Details

Marta Z. Kwiatkowska
  • Department of Computer Science, University of Oxford, UK

Cite As

Marta Z. Kwiatkowska. Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper). In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 1:1-1:5, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Abstract

Computing systems are becoming ever more complex, and increasingly often incorporate deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress in developing automated verification techniques for deep neural networks to ensure the safety and robustness of their decisions with respect to input perturbations. This includes novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods.
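To make the notion of robustness to input perturbations concrete, the sketch below checks whether a toy classifier's decision is stable under random perturbations drawn from an L-infinity ball. This is an illustrative falsification heuristic only, not any of the verification algorithms surveyed in the paper: random sampling can find counterexamples, but unlike the paper's techniques it provides no provable guarantee when none is found. All names and values here (the linear stand-in model, `epsilon`, the sample count) are assumptions for illustration.

```python
import numpy as np

def classify(x, w, b):
    """A toy linear classifier standing in for a deep network
    (assumption: any model exposing a prediction function would do)."""
    return int(np.argmax(w @ x + b))

def empirically_robust(x, w, b, epsilon, n_samples=1000, seed=0):
    """Check whether the predicted label for x is stable under random
    perturbations from the L-infinity ball of radius epsilon.

    Returns False as soon as a perturbation changes the decision
    (a counterexample to robustness), True if none is found. A True
    result is NOT a proof of robustness, only a failure to falsify."""
    rng = np.random.default_rng(seed)
    label = classify(x, w, b)
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if classify(x + delta, w, b) != label:
            return False  # counterexample: decision flipped
    return True

# Toy 2-class problem in 2 dimensions (values are illustrative).
w = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 0.0])  # classified as class 0 with margin 2

print(empirically_robust(x, w, b, epsilon=0.5))  # True: small ball cannot flip
print(empirically_robust(x, w, b, epsilon=5.0))  # False: a flip is found
```

The contrast between sampling (which, as above, only ever refutes) and methods that certify an entire perturbation ball is exactly the gap that the verification approaches with provable guarantees aim to close.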

Subject Classification

ACM Subject Classification
  • Theory of computation → Logic and verification
  • Computing methodologies → Neural networks

Keywords
  • Neural networks
  • robustness
  • formal verification
  • Bayesian neural networks

