Search Results

Documents authored by Varshney, Kush R.


Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121)

Authors: Vaishak Belle, Hana Chockler, Shannon Vallor, Kush R. Varshney, Joost Vennekens, and Sander Beckers

Published in: Dagstuhl Reports, Volume 14, Issue 3 (2024)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 24121 "Trustworthiness and Responsibility in AI - Causality, Learning, and Verification". How can we trust autonomous computer-based systems? Since such systems are increasingly being deployed in safety-critical environments while interoperating with humans, this question is rapidly becoming more important. This Dagstuhl Seminar addressed this question by bringing together an interdisciplinary group of researchers from Artificial Intelligence (AI), Machine Learning (ML), Robotics (ROB), hardware and software verification (VER), Software Engineering (SE), and Social Sciences (SS), who provided different and complementary perspectives on responsibility and correctness regarding the design of algorithms, interfaces, and development methodologies in AI. The purpose of the seminar was to initiate a debate around both theoretical foundations and practical methodologies for a "Trustworthiness & Responsibility in AI" framework that integrates quantifiable responsibility and verifiable correctness into all stages of the software engineering process. Such a framework will allow governance and regulatory practices to be viewed not only as rules and regulations imposed from afar, but also as an integrative process of dialogue and discovery to understand why an autonomous system might fail and how to help designers and regulators address these failures through proactive governance. In particular, we considered how to reason about responsibility, blame, and causal factors affecting the trustworthiness of the system. More practically, we asked what tools we can provide to regulators, verification and validation professionals, and system designers to help them clarify the intent and content of regulations down to a machine-interpretable form. While existing regulations are necessarily vague and dependent on human interpretation, we asked: How should they now be made precise and quantifiable? What is lost in the process of quantification? How do we address factors that are qualitative in nature, and integrate such concerns in an engineering regime? In addressing these questions, the seminar benefitted from extensive discussions between AI, ML, ROB, VER, SE, and SS researchers who have experience with ethical, societal, and legal aspects of AI, complex AI systems, software engineering for AI systems, and causal analysis of counterexamples and software faults.

Cite as

Vaishak Belle, Hana Chockler, Shannon Vallor, Kush R. Varshney, Joost Vennekens, and Sander Beckers. Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121). In Dagstuhl Reports, Volume 14, Issue 3, pp. 75-91, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{belle_et_al:DagRep.14.3.75,
  author =	{Belle, Vaishak and Chockler, Hana and Vallor, Shannon and Varshney, Kush R. and Vennekens, Joost and Beckers, Sander},
  title =	{{Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121)}},
  pages =	{75--91},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{14},
  number =	{3},
  editor =	{Belle, Vaishak and Chockler, Hana and Vallor, Shannon and Varshney, Kush R. and Vennekens, Joost and Beckers, Sander},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.14.3.75},
  URN =		{urn:nbn:de:0030-drops-211848},
  doi =		{10.4230/DagRep.14.3.75},
  annote =	{Keywords: responsible AI, trustworthy AI, causal machine learning, autonomous systems}
}