Security of Machine Learning (Dagstuhl Seminar 22281)

Authors: Battista Biggio, Nicholas Carlini, Pavel Laskov, Konrad Rieck, Antonio Emanuele Cinà, and all authors of the abstracts in this report



File

DagRep.12.7.41.pdf
  • Filesize: 1.64 MB
  • 21 pages

Author Details

Battista Biggio
  • University of Cagliari, IT
Nicholas Carlini
  • Google - Mountain View, US
Pavel Laskov
  • University of Liechtenstein - Vaduz, LI
Konrad Rieck
  • TU Braunschweig, DE
Antonio Emanuele Cinà
  • University of Venice, IT
and all authors of the abstracts in this report

Cite As

Battista Biggio, Nicholas Carlini, Pavel Laskov, Konrad Rieck, and Antonio Emanuele Cinà. Security of Machine Learning (Dagstuhl Seminar 22281). In Dagstuhl Reports, Volume 12, Issue 7, pp. 41-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/DagRep.12.7.41

Abstract

Machine learning techniques, especially deep neural networks inspired by mathematical models of human intelligence, have achieved unprecedented success on a variety of data analysis tasks. The reliance of critical modern technologies on machine learning, however, raises concerns about their security, especially since powerful attacks against mainstream learning algorithms have been demonstrated since the early 2010s. Despite a substantial body of related research, no comprehensive theory or design methodology is currently known for the security of machine learning. The seminar aims at identifying potential research directions that could lead to building a scientific foundation for the security of machine learning. By bringing together researchers from the machine learning and information security communities, the seminar is expected to generate new ideas for security assessment and design in the field of machine learning.
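To make the kind of attack the abstract alludes to concrete, the following is a minimal sketch, not taken from the report, of an evasion attack in the spirit of the fast gradient sign method (Goodfellow et al., 2015) against a toy linear classifier. All weights, inputs, and the perturbation budget are illustrative assumptions.

```python
# Illustrative sketch of a sign-gradient evasion attack on a linear model.
# The weights, bias, input, and eps below are hypothetical values chosen
# for demonstration; they do not come from the seminar report.
import numpy as np

w = np.array([1.0, -2.0])   # hypothetical trained weights
b = 0.5                     # hypothetical bias

def predict(x):
    """Binary decision of the linear classifier sign(w @ x + b)."""
    return 1 if w @ x + b > 0 else -1

x = np.array([2.0, 0.5])    # a clean input, classified as +1
y = predict(x)              # its label (assumed correctly classified)

# For a linear model, the gradient of the margin y * (w @ x + b) with
# respect to x is y * w; stepping against its sign shrinks the margin
# fastest under an L-infinity perturbation budget eps.
eps = 1.2
x_adv = x - eps * np.sign(y * w)

print(predict(x), predict(x_adv))   # the perturbed input flips the decision
```

The same sign-gradient step, applied to the loss gradient of a deep network rather than a linear margin, is one of the mainstream attacks the abstract refers to.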

ACM Subject Classification
  • Computer systems organization → Real-time operating systems
  • Computing methodologies → Machine learning
Keywords
  • adversarial machine learning
  • machine learning security
