Search Results

Documents authored by Belle, Vaishak


Document
Logic and Neural Networks (Dagstuhl Seminar 25061)

Authors: Vaishak Belle, Michael Benedikt, Dana Drachsler-Cohen, Daniel Neider, and Tom Yuviler

Published in: Dagstuhl Reports, Volume 15, Issue 2 (2025)


Abstract
Logic and learning are central to Computer Science, and in particular to AI-related research. Alan Turing already envisioned, in his 1950 paper "Computing Machinery and Intelligence", a combination of statistical (ab initio) machine learning and an "unemotional" symbolic language such as logic. The combination of logic and learning has received new impetus from the spectacular success of deep learning systems. This report documents the program and the outcomes of Dagstuhl Seminar 25061 "Logic and Neural Networks". The goal of the seminar was to bring together researchers from the various communities concerned with utilizing logical constraints in deep learning, and to build bridges between them through the exchange of ideas. The seminar focused on a set of interrelated topics: enforcing constraints on neural networks, verifying logical constraints on neural networks, training with logic to supplement traditional supervision, and explanation and approximation via logic. The seminar aimed not at studying these areas as separate components, but at exploring techniques common to them, as well as connections to other communities in machine learning that share the same broad goals. The seminar format consisted of long and short talks, as well as breakout sessions. We summarize the motivations and proceedings of the seminar, and report on the abstracts of the talks and the results of the breakout sessions.
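
To make one of these themes concrete: "training with logic to supplement traditional supervision" typically means relaxing a logical constraint into a differentiable penalty added to the usual loss. Below is a minimal, hypothetical sketch in PyTorch, not taken from the report; the rule "cat implies animal", the label indices, and the weight lam are illustrative assumptions.

import torch
import torch.nn.functional as F

def constraint_loss(logits, cat_idx=0, animal_idx=1):
    # Fuzzy relaxation of the illustrative rule "cat -> animal":
    # the constraint P(cat) <= P(animal) is violated exactly to the
    # degree that P(cat) exceeds P(animal).
    probs = torch.sigmoid(logits)
    return F.relu(probs[:, cat_idx] - probs[:, animal_idx]).mean()

def training_step(model, x, y, optimizer, lam=0.5):
    # Standard supervised loss plus the logic-derived penalty; lam trades
    # off data fit against constraint satisfaction (illustrative value).
    logits = model(x)
    loss = F.binary_cross_entropy_with_logits(logits, y)
    loss = loss + lam * constraint_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the penalty is differentiable, the constraint shapes training even on inputs with no labels, which is one way logic can supplement supervision.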

Cite as

Vaishak Belle, Michael Benedikt, Dana Drachsler-Cohen, Daniel Neider, and Tom Yuviler. Logic and Neural Networks (Dagstuhl Seminar 25061). In Dagstuhl Reports, Volume 15, Issue 2, pp. 1-20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{belle_et_al:DagRep.15.2.1,
  author =	{Belle, Vaishak and Benedikt, Michael and Drachsler-Cohen, Dana and Neider, Daniel and Yuviler, Tom},
  title =	{{Logic and Neural Networks (Dagstuhl Seminar 25061)}},
  pages =	{1--20},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2025},
  volume =	{15},
  number =	{2},
  editor =	{Belle, Vaishak and Benedikt, Michael and Drachsler-Cohen, Dana and Neider, Daniel and Yuviler, Tom},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.15.2.1},
  URN =		{urn:nbn:de:0030-drops-230939},
  doi =		{10.4230/DagRep.15.2.1},
  annote =	{Keywords: machine learning, learning theory, logic, computational complexity, databases, verification, safety}
}
Document
Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121)

Authors: Vaishak Belle, Hana Chockler, Shannon Vallor, Kush R. Varshney, Joost Vennekens, and Sander Beckers

Published in: Dagstuhl Reports, Volume 14, Issue 3 (2024)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 24121 "Trustworthiness and Responsibility in AI - Causality, Learning, and Verification". How can we trust autonomous computer-based systems? Since such systems are increasingly being deployed in safety-critical environments while interoperating with humans, this question is rapidly becoming more important. This Dagstuhl Seminar addressed the question by bringing together an interdisciplinary group of researchers from Artificial Intelligence (AI), Machine Learning (ML), Robotics (ROB), hardware and software verification (VER), Software Engineering (SE), and Social Sciences (SS), who provided different and complementary perspectives on responsibility and correctness in the design of algorithms, interfaces, and development methodologies in AI. The purpose of the seminar was to initiate a debate around both theoretical foundations and practical methodologies for a "Trustworthiness & Responsibility in AI" framework that integrates quantifiable responsibility and verifiable correctness into all stages of the software engineering process. Such a framework would allow governance and regulatory practices to be viewed not as rules and regulations imposed from afar, but as an integrative process of dialogue and discovery aimed at understanding why an autonomous system might fail and at helping designers and regulators address such failures through proactive governance. In particular, we considered how to reason about responsibility, blame, and causal factors affecting the trustworthiness of a system. More practically, we asked what tools we can provide to regulators, verification and validation professionals, and system designers to help them refine the intent and content of regulations down to a machine-interpretable form. Existing regulations are necessarily vague and dependent on human interpretation, so we asked: how should they be made precise and quantifiable? What is lost in the process of quantification? How do we address factors that are qualitative in nature, and how do we integrate such concerns into an engineering regime? In addressing these questions, the seminar benefitted from extensive discussions between AI, ML, ROB, VER, SE, and SS researchers with experience in the ethical, societal, and legal aspects of AI, complex AI systems, software engineering for AI systems, and causal analysis of counterexamples and software faults.
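
As one concrete instance of "reasoning about responsibility, blame, and causal factors": Chockler and Halpern define an agent's degree of responsibility for an outcome as 1/(k+1), where k is the minimal number of other changes needed before the agent's own action becomes critical. The sketch below is an illustration of that definition on a simple majority vote, not code from the report.

def degree_of_responsibility(votes, i):
    # votes: list of 0/1 ballots; the outcome is 1 iff a strict majority vote 1.
    # Returns 1/(k+1), where k is the minimal number of *other* voters that
    # must flip before flipping voter i alone would change the outcome.
    n, total = len(votes), sum(votes)
    m = n // 2 + 1                      # votes needed to carry the outcome
    outcome = total >= m
    if bool(votes[i]) != outcome:
        return 0.0                      # i voted against the actual outcome
    k = (total - m) if outcome else (m - 1 - total)
    return 1.0 / (k + 1)

# In an 11-0 vote no single voter is pivotal, yet each carries
# responsibility 1/6: five others must flip before that vote is critical.
print(degree_of_responsibility([1] * 11, 0))  # 0.1666...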

Cite as

Vaishak Belle, Hana Chockler, Shannon Vallor, Kush R. Varshney, Joost Vennekens, and Sander Beckers. Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121). In Dagstuhl Reports, Volume 14, Issue 3, pp. 75-91, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{belle_et_al:DagRep.14.3.75,
  author =	{Belle, Vaishak and Chockler, Hana and Vallor, Shannon and Varshney, Kush R. and Vennekens, Joost and Beckers, Sander},
  title =	{{Trustworthiness and Responsibility in AI - Causality, Learning, and Verification (Dagstuhl Seminar 24121)}},
  pages =	{75--91},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{14},
  number =	{3},
  editor =	{Belle, Vaishak and Chockler, Hana and Vallor, Shannon and Varshney, Kush R. and Vennekens, Joost and Beckers, Sander},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.14.3.75},
  URN =		{urn:nbn:de:0030-drops-211848},
  doi =		{10.4230/DagRep.14.3.75},
  annote =	{Keywords: responsible AI, trustworthy AI, causal machine learning, autonomous systems}
}