6 Search Results for "Slavkovik, Marija"


Document
Roadmap for Responsible Robotics (Dagstuhl Seminar 23371)

Authors: Michael Fisher, Marija Slavkovik, Anna Dobrosovestnova, and Nick Schuster

Published in: Dagstuhl Reports, Volume 13, Issue 9 (2024)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 23371 "Roadmap for Responsible Robotics". The seminar was concerned with robots across all their forms, particularly autonomous robots capable of making their own decisions and taking their own actions without direct human oversight. The seminar brought together experts in computer science, robotics, engineering, philosophy, cognitive science, and human-robot interaction, as well as industry representatives, with the aim of building on the steps towards ethical and responsible robotic systems initiated by actors such as the European Robotics Research Network (EURON), the European Union's REELER project, and others. We discussed topics including: "Why do autonomous robots warrant distinct normative considerations?", "Which stakeholders are, or should be, involved in the development and deployment of robotic systems, and how do we configure their responsibilities?", and "What are the principal tenets of responsible robotics beyond commonly associated themes, namely trust, fairness, predictability, and understandability?". Through intensive discussion of these and related questions, motivated by the various values at stake as robotic systems become increasingly present and impactful in human life, this interdisciplinary group identified a set of interrelated priorities to guide future research and regulatory efforts. The resulting roadmap aims to ensure that robotic systems co-evolve with human societies so as to advance, rather than undermine, human agency and humane values.

Cite as

Michael Fisher, Marija Slavkovik, Anna Dobrosovestnova, and Nick Schuster. Roadmap for Responsible Robotics (Dagstuhl Seminar 23371). In Dagstuhl Reports, Volume 13, Issue 9, pp. 103-115, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{fisher_et_al:DagRep.13.9.103,
  author =	{Fisher, Michael and Slavkovik, Marija and Dobrosovestnova, Anna and Schuster, Nick},
  title =	{{Roadmap for Responsible Robotics (Dagstuhl Seminar 23371)}},
  pages =	{103--115},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{9},
  editor =	{Fisher, Michael and Slavkovik, Marija and Dobrosovestnova, Anna and Schuster, Nick},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.9.103},
  URN =		{urn:nbn:de:0030-drops-198223},
  doi =		{10.4230/DagRep.13.9.103},
  annote =	{Keywords: Robotics, Responsibility, Trust, Fairness, Predictability, Understandability, Ethics}
}
Document
Normative Reasoning for AI (Dagstuhl Seminar 23151)

Authors: Agata Ciabattoni, John F. Horty, Marija Slavkovik, Leendert van der Torre, and Aleks Knoks

Published in: Dagstuhl Reports, Volume 13, Issue 4 (2023)


Abstract
Normative reasoning is reasoning about normative matters such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can, and arguably should, play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and why some other norms do not apply. In most cases, our reasons for wanting to know are purely practical (we want to make the correct decision), but they can also be more theoretical, as they are when we engage in theoretical ethics. Either way, the same questions are crucial for designing autonomous agents sensitive to legal, ethical, and social norms. This Dagstuhl Seminar brought together experts in computer science, logic (including deontic logic and argumentation), philosophy, ethics, and law with the aim of finding effective ways of formalizing norms and embedding normative reasoning in AI systems. We discussed new ways of using deontic logic and argumentation to provide explanations answering normative why-questions, including questions such as "Why should I do A (rather than B)?", "Why should you do A (rather than I)?", "Why do you have the right to do A despite a certain fact or a certain norm?", and "Why does one normative system forbid me to do A, while another one allows it?". We also explored the use of formal methods in combination with sub-symbolic AI (or machine learning) with a view to designing autonomous agents that can follow (legal, ethical, and social) norms.
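
To make the last why-question concrete, the following minimal sketch (not from the report; the norms, actions, and names are invented for illustration) shows how a rule-based agent could report every norm that applies to an action in a given context, so that conflicting verdicts from two normative systems come with their reasons attached:

from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    name: str       # identifier of the norm, e.g. "statute-7" (hypothetical)
    action: str     # the action the norm regulates
    status: str     # "obligatory", "permitted", or "forbidden"
    condition: str  # context feature under which the norm applies

def evaluate(action, context, norms):
    """Return (status, norm name) for every norm applying to `action`
    in `context`, so each verdict carries the norm that explains it."""
    return [(n.status, n.name) for n in norms
            if n.action == action and n.condition in context]

# Two hypothetical normative systems that disagree about one action.
legal_code  = [Norm("statute-7", "share_data", "forbidden", "no_consent")]
house_rules = [Norm("policy-2",  "share_data", "permitted", "no_consent")]

context = {"no_consent"}
print(evaluate("share_data", context, legal_code))   # [('forbidden', 'statute-7')]
print(evaluate("share_data", context, house_rules))  # [('permitted', 'policy-2')]

A real deontic-logic framework handles conditional obligations, priorities, and norm conflicts far beyond this toy lookup; the sketch only illustrates how an explainable verdict can cite the norm it rests on.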

Cite as

Agata Ciabattoni, John F. Horty, Marija Slavkovik, Leendert van der Torre, and Aleks Knoks. Normative Reasoning for AI (Dagstuhl Seminar 23151). In Dagstuhl Reports, Volume 13, Issue 4, pp. 1-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{ciabattoni_et_al:DagRep.13.4.1,
  author =	{Ciabattoni, Agata and Horty, John F. and Slavkovik, Marija and van der Torre, Leendert and Knoks, Aleks},
  title =	{{Normative Reasoning for AI (Dagstuhl Seminar 23151)}},
  pages =	{1--23},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{13},
  number =	{4},
  editor =	{Ciabattoni, Agata and Horty, John F. and Slavkovik, Marija and van der Torre, Leendert and Knoks, Aleks},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.4.1},
  URN =		{urn:nbn:de:0030-drops-192367},
  doi =		{10.4230/DagRep.13.4.1},
  annote =	{Keywords: deontic logic, autonomous agents, AI ethics, deontic explanations}
}
Document
Invited Paper
Automating Moral Reasoning (Invited Paper)

Authors: Marija Slavkovik

Published in: OASIcs, Volume 99, International Research School in Artificial Intelligence in Bergen (AIB 2022)


Abstract
Artificial Intelligence ethics is concerned with ensuring a non-negative ethical impact of researching, developing, deploying, and using AI systems. One way to accomplish that is to enable those AI systems to make moral decisions in ethically sensitive situations, i.e., to automate moral reasoning. Machine ethics is an interdisciplinary research area concerned with the problem of automating moral reasoning. This tutorial presents the problem of making moral decisions and gives a general overview of how a computational agent can be constructed to make such decisions. The tutorial is aimed at students of artificial intelligence who are interested in acquiring an initial understanding of the basic concepts and a gateway to the literature in machine ethics.
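
As a flavour of what automating moral reasoning can look like, here is a minimal sketch (not from the tutorial; the actions, values, weights, and impact estimates are all invented for illustration) of an agent that scores candidate actions against weighted moral values and picks the best-scoring one:

def moral_choice(actions, value_weights, impact):
    """Pick the action with the highest weighted moral score.

    actions       -- list of action names
    value_weights -- dict mapping each moral value to its importance
    impact        -- dict mapping (action, value) to an effect in [-1, 1]
    """
    def score(a):
        return sum(w * impact.get((a, v), 0.0) for v, w in value_weights.items())
    return max(actions, key=score)

# Hypothetical example: a care robot deciding whether to wake a patient
# for medication or let them rest.
actions = ["wake_patient", "wait"]
weights = {"wellbeing": 0.6, "autonomy": 0.4}
impact = {
    ("wake_patient", "wellbeing"):  0.8,  # medication given on time
    ("wake_patient", "autonomy"):  -0.5,  # overrides the patient's rest
    ("wait", "wellbeing"):         -0.3,  # medication delayed
    ("wait", "autonomy"):           0.6,  # respects the patient's wishes
}
print(moral_choice(actions, weights, impact))  # -> wake_patient

A weighted-sum scheme like this is only one possible design, and it glosses over where the weights and impact estimates would come from, which is much of what makes machine ethics hard.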

Cite as

Marija Slavkovik. Automating Moral Reasoning (Invited Paper). In International Research School in Artificial Intelligence in Bergen (AIB 2022). Open Access Series in Informatics (OASIcs), Volume 99, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{slavkovik:OASIcs.AIB.2022.6,
  author =	{Slavkovik, Marija},
  title =	{{Automating Moral Reasoning}},
  booktitle =	{International Research School in Artificial Intelligence in Bergen (AIB 2022)},
  pages =	{6:1--6:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-228-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{99},
  editor =	{Bourgaux, Camille and Ozaki, Ana and Pe\~{n}aloza, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.AIB.2022.6},
  URN =		{urn:nbn:de:0030-drops-160043},
  doi =		{10.4230/OASIcs.AIB.2022.6},
  annote =	{Keywords: Machine ethics, artificial morality, artificial moral agents}
}
Document
Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171)

Authors: Michael Fisher, Christian List, Marija Slavkovik, and Astrid Weiss

Published in: Dagstuhl Reports, Volume 9, Issue 4 (2019)


Abstract
This report documents the programme of, and outcomes from, Dagstuhl Seminar 19171 on "Ethics and Trust: Principles, Verification and Validation". We consider the issues of ethics and trust as crucial to the future acceptance and use of autonomous systems. The development of new classes of autonomous systems, such as medical robots, "driver-less" cars, and assistive care robots, has opened up questions of how we can integrate truly autonomous systems into our society. Once a system is truly autonomous, i.e. learning from interactions, moving and manipulating the world we live in, and making decisions by itself, we must be certain that it will act in a safe and ethical way, i.e. that it will be able to distinguish 'right' from 'wrong' and make the decisions we would expect of it. In order for society to accept these new machines, we must also trust them, i.e. we must believe that they are reliable and that they are trying to assist us, especially when engaged in close human-robot interaction. The seminar focused on how trust in autonomous machines evolves, how to build a 'practical' ethical and trustworthy system, and what the societal implications are. Key issues included: change of trust and trust repair, AI systems as decision makers, complex systems of norms and algorithmic bias, and potential discrepancies between the expectations and capabilities of autonomous machines. This workshop was a follow-up to the 2016 Dagstuhl Seminar 16222 on Engineering Moral Agents: From Human Morality to Artificial Morality. When organizing this workshop, we aimed to bring together communities of researchers from moral philosophy and from artificial intelligence, and to extend them with researchers from (social) robotics and human-robot interaction.

Cite as

Michael Fisher, Christian List, Marija Slavkovik, and Astrid Weiss. Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171). In Dagstuhl Reports, Volume 9, Issue 4, pp. 59-86, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{fisher_et_al:DagRep.9.4.59,
  author =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Weiss, Astrid},
  title =	{{Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171)}},
  pages =	{59--86},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{4},
  editor =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Weiss, Astrid},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.9.4.59},
  URN =		{urn:nbn:de:0030-drops-113046},
  doi =		{10.4230/DagRep.9.4.59},
  annote =	{Keywords: Verification, Artificial Morality, Social Robotics, Machine Ethics, Autonomous Systems, Explainable AI, Safety, Trust, Mathematical Philosophy, Robot Ethics, Human-Robot Interaction}
}
Document
Engineering Moral Agents – from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)

Authors: Michael Fisher, Christian List, Marija Slavkovik, and Alan Winfield

Published in: Dagstuhl Reports, Volume 6, Issue 5 (2016)


Abstract
This report documents the programme of, and outcomes from, Dagstuhl Seminar 16222 on "Engineering Moral Agents – from Human Morality to Artificial Morality". Artificial morality is an emerging area of research within artificial intelligence (AI), concerned with the problem of designing artificial agents that behave as moral agents, i.e. adhere to moral, legal, and social norms. Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in making decisions that affect our lives. While humanity has developed formal legal and informal moral and social norms to govern its own social interactions, there are no similar regulatory structures that apply to non-human agents. The seminar focused on questions of how to formalise, "quantify", qualify, validate, verify, and modify the "ethics" of moral machines. Key issues included the following: How do we build regulatory structures that address (un)ethical machine behaviour? What are the wider societal, legal, and economic implications of introducing AI machines into our society? How do we develop "computational" ethics, and what are the difficult challenges that need to be addressed? When organising this workshop, we aimed to bring together the communities of researchers from moral philosophy and from artificial intelligence most concerned with this topic. This is a long-term endeavour, but the seminar was successful in laying the foundations and connections for accomplishing it.

Cite as

Michael Fisher, Christian List, Marija Slavkovik, and Alan Winfield. Engineering Moral Agents – from Human Morality to Artificial Morality (Dagstuhl Seminar 16222). In Dagstuhl Reports, Volume 6, Issue 5, pp. 114-137, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{fisher_et_al:DagRep.6.5.114,
  author =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Winfield, Alan},
  title =	{{Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)}},
  pages =	{114--137},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{5},
  editor =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Winfield, Alan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.5.114},
  URN =		{urn:nbn:de:0030-drops-67236},
  doi =		{10.4230/DagRep.6.5.114},
  annote =	{Keywords: Artificial Morality, Machine Ethics, Computational Morality, Autonomous Systems, Intelligent Systems, Formal Ethics, Mathematical Philosophy, Robot Ethics}
}
Document
JA4AI – Judgment Aggregation for Artificial Intelligence (Dagstuhl Seminar 14202)

Authors: Franz Dietrich, Ulle Endriss, Davide Grossi, Gabriella Pigozzi, and Marija Slavkovik

Published in: Dagstuhl Reports, Volume 4, Issue 5 (2014)


Abstract
This report documents the programme and the outcomes of Dagstuhl Seminar 14202 on "Judgment Aggregation for Artificial Intelligence". Judgment aggregation is a new group decision-making theory that lies at the intersection of logic and social choice; it studies how to reach group decisions on several logically interconnected issues by aggregating individual judgments. Until recently, research in judgment aggregation was dominated by its originating context of philosophy, political science, and law. Presently, however, we are witnessing increasing work on judgment aggregation from researchers in computer science. Since researchers from these diverse disciplinary backgrounds each publish within their own discipline, with virtually no cross-discipline cooperation on concrete projects, it is essential to give them an opportunity to connect with each other and become aware of each other's work. This seminar provided such an opportunity.
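
The core difficulty judgment aggregation studies is easy to demonstrate. The following minimal sketch (an illustration, not from the report) shows the classic "discursive dilemma": proposition-wise majority voting over the logically connected issues p, q, and p AND q can yield an inconsistent collective judgment even though every individual judgment set is consistent:

# Each row is one individual's judgment on (p, q, p AND q);
# every row, taken on its own, is logically consistent.
judgments = [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
]

def majority(column):
    """Proposition-wise majority on one issue."""
    votes = [row[column] for row in judgments]
    return sum(votes) > len(votes) / 2

p, q, p_and_q = majority(0), majority(1), majority(2)
print(p, q, p_and_q)         # True True False
print(p_and_q == (p and q))  # False: the collective judgment is inconsistent

Escaping this inconsistency, for instance via premise-based or distance-based aggregation rules, is exactly the kind of question the field investigates.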

Cite as

Franz Dietrich, Ulle Endriss, Davide Grossi, Gabriella Pigozzi, and Marija Slavkovik. JA4AI – Judgment Aggregation for Artificial Intelligence (Dagstuhl Seminar 14202). In Dagstuhl Reports, Volume 4, Issue 5, pp. 27-39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@Article{dietrich_et_al:DagRep.4.5.27,
  author =	{Dietrich, Franz and Endriss, Ulle and Grossi, Davide and Pigozzi, Gabriella and Slavkovik, Marija},
  title =	{{JA4AI – Judgment Aggregation for Artificial Intelligence (Dagstuhl Seminar 14202)}},
  pages =	{27--39},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2014},
  volume =	{4},
  number =	{5},
  editor =	{Dietrich, Franz and Endriss, Ulle and Grossi, Davide and Pigozzi, Gabriella and Slavkovik, Marija},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.5.27},
  URN =		{urn:nbn:de:0030-drops-46791},
  doi =		{10.4230/DagRep.4.5.27},
  annote =	{Keywords: Judgment Aggregation, Artificial Intelligence, Computational Social Choice, Collective Decision-making}
}
  • Refine by Author
  • 6 Slavkovik, Marija
  • 3 Fisher, Michael
  • 2 List, Christian
  • 1 Ciabattoni, Agata
  • 1 Dietrich, Franz

  • Refine by Classification
  • 1 Computer systems organization → Robotic autonomy
  • 1 Computing methodologies → Artificial intelligence
  • 1 Computing methodologies → Multi-agent systems
  • 1 Computing methodologies → Philosophical/theoretical foundations of artificial intelligence
  • 1 Security and privacy → Trust frameworks

  • Refine by Keyword
  • 2 Artificial Morality
  • 2 Autonomous Systems
  • 2 Machine Ethics
  • 2 Mathematical Philosophy
  • 2 Robot Ethics

  • Refine by Type
  • 6 document

  • Refine by Publication Year
  • 1 2014
  • 1 2016
  • 1 2019
  • 1 2022
  • 1 2023
