Search Results

Documents authored by Tygar, J. Doug


Document
Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

Authors: Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson

Published in: Dagstuhl Manifestos, Volume 3, Issue 1 (2013)


Abstract
The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security. The interest in learning-based methods for security- and system-design applications comes from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties solely using statically designed mechanisms, learning methods are being used more and more to obtain a better understanding of various data collected from these complex systems. However, learning approaches can be evaded by adversaries, who change their behavior in response to the learning methods. To date, there has been limited research into learning techniques that are resilient to attacks and offer provable robustness guarantees. The Perspectives Workshop "Machine Learning Methods for Computer Security" was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. As a result of the twenty-two invited presentations, workgroup sessions, and informal discussions, several priority areas of research were identified. The open problems identified in the field ranged from traditional applications of machine learning in security, such as attack detection and analysis of malicious software, to methodological issues related to secure learning, especially the development of new formal approaches with provable security guarantees. Finally, a number of other potential applications were pinpointed outside the traditional scope of computer security in which security issues may also arise in connection with data-driven methods. Examples of such applications are social media spam, plagiarism detection, authorship identification, copyright enforcement, computer vision (particularly in the context of biometrics), and sentiment analysis.
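To make the evasion threat mentioned in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the report): a toy linear classifier is trained on two made-up features, and an "adversary" pads a malicious sample with benign-looking feature mass so it crosses the learned decision boundary. All data, feature semantics, and the attack itself are illustrative assumptions.

```python
# Illustrative sketch only (assumed toy data, not from the workshop report):
# an adversary evading a learned linear classifier by shifting its features.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 = "suspicious tokens", feature 1 = "benign tokens"; label 1 = malicious.
X_mal = rng.normal(loc=[4.0, 1.0], scale=0.5, size=(100, 2))
X_ben = rng.normal(loc=[1.0, 4.0], scale=0.5, size=(100, 2))
X = np.vstack([X_mal, X_ben])
y = np.hstack([np.ones(100), np.zeros(100)])

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "malicious"
    w -= 0.3 * (X.T @ (p - y) / len(y))      # gradient step on the weights
    b -= 0.3 * np.mean(p - y)                # gradient step on the bias

def is_malicious(x):
    return (x @ w + b) > 0.0

sample = np.array([4.0, 1.0])                # a typical malicious point
print("original flagged:", is_malicious(sample))   # typically True on this toy data

# Evasion: the adversary adds benign-looking feature mass, moving the sample
# across the learned decision boundary without removing the malicious part.
evading = sample + np.array([0.0, 6.0])
print("evading flagged:", is_malicious(evading))    # typically False on this toy data
```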

Cite as

Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). In Dagstuhl Manifestos, Volume 3, Issue 1, pp. 1-30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@Article{joseph_et_al:DagMan.3.1.1,
  author =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{1--30},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2013},
  volume =	{3},
  number =	{1},
  editor =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.3.1.1},
  URN =		{urn:nbn:de:0030-drops-43569},
  doi =		{10.4230/DagMan.3.1.1},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}
Document
Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

Authors: Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson

Published in: Dagstuhl Reports, Volume 2, Issue 9 (2013)


Abstract
The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security that raises new questions within both fields. The interest in learning-based methods for security and system design applications comes from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties by design alone, learning methods are being used to obtain a better understanding of various data collected from these complex systems. However, learning approaches can be co-opted or evaded by adversaries, who adapt their behavior to counter them. To date, there has been limited research into learning techniques that are resilient to attacks and offer provable robustness guarantees, making the design of secure learning-based systems a rich open research area with many challenges. The Perspectives Workshop "Machine Learning Methods for Computer Security" was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. The workshop featured twenty-two invited talks from leading researchers within the secure learning community, covering topics in adversarial learning, game-theoretic learning, collective classification, privacy-preserving learning, security evaluation metrics, digital forensics, authorship identification, adversarial advertisement detection, learning for offensive security, and data sanitization. The workshop also featured workgroup sessions organized into three topics: machine learning for computer security, secure learning, and future applications of secure learning.

Cite as

Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). In Dagstuhl Reports, Volume 2, Issue 9, pp. 109-130, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@Article{joseph_et_al:DagRep.2.9.109,
  author =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{109--130},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2013},
  volume =	{2},
  number =	{9},
  editor =	{Joseph, Anthony D. and Laskov, Pavel and Roli, Fabio and Tygar, J. Doug and Nelson, Blaine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.2.9.109},
  URN =		{urn:nbn:de:0030-drops-37908},
  doi =		{10.4230/DagRep.2.9.109},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}