Search Results

Documents authored by Laskov, Pavel


Document
Network Attack Detection and Defense - AI-Powered Threats and Responses (Dagstuhl Seminar 23431)

Authors: Sven Dietrich, Frank Kargl, Hartmut König, Pavel Laskov, and Artur Hermann

Published in: Dagstuhl Reports, Volume 13, Issue 10 (2024)


Abstract
This report documents the program and the findings of Dagstuhl Seminar 23431 "Network Attack Detection and Defense - AI-Powered Threats and Responses". With the emergence of artificial intelligence (AI), attack detection and defense are reaching a new level of sophistication. Artificial intelligence will drive further automation of attacks; examples such as the DeepLocker malware already exist. It is expected that we will soon face a situation in which malware and attacks become increasingly automated, intelligent, and AI-powered. Consequently, today's threat response systems will become increasingly inadequate, especially when they rely on the manual intervention of security experts and analysts. The main objective of the seminar was to assess the state of the art and the potential that advances in AI create for both attackers and defenders. The seminar continued the series of Dagstuhl events "Network Attack Detection and Defense" held in 2008, 2012, 2014, and 2016. The objectives of the seminar were threefold: (1) to investigate various scenarios of AI-based malware and attacks, (2) to debate trust in AI and the modeling of threats against AI, and (3) to propose methods and strategies for AI-powered network defenses. At the seminar, which brought together participants from academia and industry, we observed that recent advances in artificial intelligence have opened up new possibilities in each of these directions. As more and more researchers in networking and security turn to AI-based methods, this was a timely event to assess and categorize the state of the art and to work towards a roadmap for future research. The outcome of the discussions and the proposed research directions are presented in this report.

Cite as

Sven Dietrich, Frank Kargl, Hartmut König, Pavel Laskov, and Artur Hermann. Network Attack Detection and Defense - AI-Powered Threats and Responses (Dagstuhl Seminar 23431). In Dagstuhl Reports, Volume 13, Issue 10, pp. 90-129, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{dietrich_et_al:DagRep.13.10.90,
  author =	{Dietrich, Sven and Kargl, Frank and K\"{o}nig, Hartmut and Laskov, Pavel and Hermann, Artur},
  title =	{{Network Attack Detection and Defense - AI-Powered Threats and Responses (Dagstuhl Seminar 23431)}},
  pages =	{90--129},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{10},
  editor =	{Dietrich, Sven and Kargl, Frank and K\"{o}nig, Hartmut and Laskov, Pavel and Hermann, Artur},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.10.90},
  URN =		{urn:nbn:de:0030-drops-198365},
  doi =		{10.4230/DagRep.13.10.90},
  annote =	{Keywords: artificial intelligence, cybersecurity, intrusion detection, machine learning}
}
Document
Security of Machine Learning (Dagstuhl Seminar 22281)

Authors: Battista Biggio, Nicholas Carlini, Pavel Laskov, Konrad Rieck, and Antonio Emanuele Cinà

Published in: Dagstuhl Reports, Volume 12, Issue 7 (2023)


Abstract
Machine learning techniques, especially deep neural networks inspired by mathematical models of human intelligence, have achieved unprecedented success on a variety of data analysis tasks. The reliance of critical modern technologies on machine learning, however, raises concerns about their security, especially since powerful attacks against mainstream learning algorithms have been demonstrated since the early 2010s. Despite a substantial body of related research, no comprehensive theory or design methodology is currently known for the security of machine learning. The seminar aimed at identifying potential research directions that could lead to a scientific foundation for the security of machine learning. By bringing together researchers from the machine learning and information security communities, the seminar was expected to generate new ideas for security assessment and design in the field of machine learning.

Cite as

Battista Biggio, Nicholas Carlini, Pavel Laskov, Konrad Rieck, and Antonio Emanuele Cinà. Security of Machine Learning (Dagstuhl Seminar 22281). In Dagstuhl Reports, Volume 12, Issue 7, pp. 41-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{biggio_et_al:DagRep.12.7.41,
  author =	{Biggio, Battista and Carlini, Nicholas and Laskov, Pavel and Rieck, Konrad and Cin\`{a}, Antonio Emanuele},
  title =	{{Security of Machine Learning (Dagstuhl Seminar 22281)}},
  pages =	{41--61},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{7},
  editor =	{Biggio, Battista and Carlini, Nicholas and Laskov, Pavel and Rieck, Konrad and Cin\`{a}, Antonio Emanuele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.7.41},
  URN =		{urn:nbn:de:0030-drops-176117},
  doi =		{10.4230/DagRep.12.7.41},
  annote =	{Keywords: adversarial machine learning, machine learning security}
}
Document
Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

Authors: Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson

Published in: Dagstuhl Manifestos, Volume 3, Issue 1 (2013)


Abstract
The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security. The interest in learning-based methods for security- and system-design applications comes from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties solely using statically designed mechanisms, learning methods are being used more and more to obtain a better understanding of the various data collected from these complex systems. However, learning approaches can be evaded by adversaries, who change their behavior in response to the learning methods. To date, there has been limited research into learning techniques that are resilient to attacks with provable robustness guarantees. The Perspectives Workshop "Machine Learning Methods for Computer Security" was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. As a result of the twenty-two invited presentations, workgroup sessions, and informal discussions, several priority areas of research were identified. The open problems identified in the field ranged from traditional applications of machine learning in security, such as attack detection and the analysis of malicious software, to methodological issues related to secure learning, especially the development of new formal approaches with provable security guarantees. Finally, a number of other potential applications were pinpointed outside the traditional scope of computer security in which security issues may also arise in connection with data-driven methods. Examples of such applications are social media spam, plagiarism detection, authorship identification, copyright enforcement, computer vision (particularly in the context of biometrics), and sentiment analysis.

Cite as

Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). In Dagstuhl Manifestos, Volume 3, Issue 1, pp. 1-30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Article{joseph_et_al:DagMan.3.1.1,
  author =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{1--30},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2013},
  volume =	{3},
  number =	{1},
  editor =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.3.1.1},
  URN =		{urn:nbn:de:0030-drops-43569},
  doi =		{10.4230/DagMan.3.1.1},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}
Document
Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

Authors: Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson

Published in: Dagstuhl Reports, Volume 2, Issue 9 (2013)


Abstract
The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security that raises new questions within both fields. The interest in learning-based methods for security and system design applications comes from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties by design alone, learning methods are being used to obtain a better understanding of the various data collected from these complex systems. However, learning approaches can be co-opted or evaded by adversaries, who change their behavior to counter them. To date, there has been limited research into learning techniques that are resilient to attacks with provable robustness guarantees, making the design of secure learning-based systems a lucrative open research area with many challenges. The Perspectives Workshop "Machine Learning Methods for Computer Security" was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. The workshop featured twenty-two invited talks from leading researchers within the secure learning community, covering topics in adversarial learning, game-theoretic learning, collective classification, privacy-preserving learning, security evaluation metrics, digital forensics, authorship identification, adversarial advertisement detection, learning for offensive security, and data sanitization. The workshop also featured workgroup sessions organized into three topics: machine learning for computer security, secure learning, and future applications of secure learning.

Cite as

Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). In Dagstuhl Reports, Volume 2, Issue 9, pp. 109-130, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@Article{joseph_et_al:DagRep.2.9.109,
  author =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{109--130},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2013},
  volume =	{2},
  number =	{9},
  editor =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.2.9.109},
  URN =		{urn:nbn:de:0030-drops-37908},
  doi =		{10.4230/DagRep.2.9.109},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}
Document
8. 08102 Manifesto – Perspectives Workshop: Network Attack Detection and Defense

Authors: Georg Carle, Falko Dressler, Richard A. Kemmerer, Hartmut Koenig, Christopher Kruegel, and Pavel Laskov

Published in: Dagstuhl Seminar Proceedings, Volume 8102, Perspectives Workshop: Network Attack Detection and Defense (2008)


Abstract
This manifesto is the result of the Perspectives Workshop "Network Attack Detection and Defense" held at Schloss Dagstuhl (Germany) from March 2nd to 6th, 2008. The participants of the workshop were researchers from Austria, France, Norway, Switzerland, the United States, and Germany who work actively in the fields of intrusion detection and network monitoring. The consensus among the workshop attendees was that intrusion detection and flow analysis, which have been developed as complementary approaches for the detection of network attacks, should more strongly combine event detection and correlation techniques to better meet future challenges in reactive security. The workshop participants considered various perspectives to envision future network attack detection and defense. The following topics are seen as important for the future: the development of early warning systems, the introduction of situation awareness, the improvement of measurement technology, the taxonomy of attacks, the application of intrusion and fraud detection to web services, and anomaly detection. In order to realize these visions, the state of the art, the challenges, and the research priorities were identified for each topic by working groups. The outcomes of the discussions are summarized in working group papers, which are published in the workshop proceedings. The editors compiled these papers into this manifesto.

Cite as

Georg Carle, Falko Dressler, Richard A. Kemmerer, Hartmut Koenig, Christopher Kruegel, and Pavel Laskov. 8. 08102 Manifesto – Perspectives Workshop: Network Attack Detection and Defense. In Perspectives Workshop: Network Attack Detection and Defense. Dagstuhl Seminar Proceedings, Volume 8102, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{carle_et_al:DagSemProc.08102.8,
  author =	{Carle, Georg and Dressler, Falko and Kemmerer, Richard A. and Koenig, Hartmut and Kruegel, Christopher and Laskov, Pavel},
  title =	{{8. 08102 Manifesto – Perspectives Workshop: Network Attack Detection and Defense}},
  booktitle =	{Perspectives Workshop: Network Attack Detection and Defense},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8102},
  editor =	{Georg Carle and Falko Dressler and Richard A. Kemmerer and Hartmut K\"{o}nig and Christopher Kruegel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08102.8},
  URN =		{urn:nbn:de:0030-drops-14917},
  doi =		{10.4230/DagSemProc.08102.8},
  annote =	{Keywords: Manifesto of the Dagstuhl Perspective Workshop, March 2nd - 6th, 2008}
}