Normative reasoning is reasoning about normative matters - such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can - and arguably should - play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, and to understand why and when they apply - and why other norms do not. In most cases, our reasons for wanting to know are purely practical: we want to make the correct decision. But they can also be more theoretical, as they are when we engage in theoretical ethics. Either way, the same questions are crucial for designing autonomous agents sensitive to legal, ethical, and social norms. This Dagstuhl Seminar brought together experts in computer science, logic (including deontic logic and argumentation), philosophy, ethics, and law with the aim of finding effective ways of formalizing norms and embedding normative reasoning in AI systems. We discussed new ways of using deontic logic and argumentation to provide explanations answering normative why-questions, such as "Why should I do A (rather than B)?", "Why should you do A (rather than I)?", "Why do you have the right to do A despite a certain fact or a certain norm?", and "Why does one normative system forbid me to do A, while another allows it?". We also explored the use of formal methods in combination with sub-symbolic AI (machine learning) with a view to designing autonomous agents that can follow legal, ethical, and social norms.
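To make the notation behind such why-questions concrete, here is a minimal sketch in standard deontic logic (SDL) - only one of the formalisms discussed at the seminar, not one it fixes - where the questions become requests to justify derivability claims about the obligation operator O; the systems N_1, N_2 and the act A are illustrative placeholders.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard deontic logic (SDL): O = obligation; permission P and
% prohibition F are standardly defined in terms of O.
\[
  P\,A \equiv \lnot O \lnot A
  \qquad
  F\,A \equiv O \lnot A
\]
% The cross-system conflict quoted above, read as two derivability
% claims: normative system N_1 forbids A while N_2 permits it.
\[
  N_1 \vdash O \lnot A
  \qquad \text{while} \qquad
  N_2 \vdash \lnot O \lnot A
\]
\end{document}

On this reading, answering "Why does one normative system forbid me to do A, while another allows it?" amounts to exhibiting the derivations behind each claim, which is where argumentation-based explanation enters.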