Generalization by People and Machines (Dagstuhl Seminar 24192)

Authors Barbara Hammer, Filip Ilievski, Sascha Saralajew, Frank van Harmelen and all authors of the abstracts in this report




File

DagRep.14.5.1.pdf
  • Filesize: 2.1 MB
  • 11 pages

Author Details

Barbara Hammer
  • Universität Bielefeld, DE
Filip Ilievski
  • VU Amsterdam, NL
Sascha Saralajew
  • NEC Laboratories Europe - Heidelberg, DE
Frank van Harmelen
  • VU Amsterdam, NL
and all authors of the abstracts in this report

Cite As

Barbara Hammer, Filip Ilievski, Sascha Saralajew, and Frank van Harmelen. Generalization by People and Machines (Dagstuhl Seminar 24192). In Dagstuhl Reports, Volume 14, Issue 5, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/DagRep.14.5.1

Abstract

Today’s AI systems are powerful to the extent that they have largely entered the mainstream, dividing the world between those who believe AI will solve all our problems and those who fear it will be destructive to humanity. Meanwhile, trusting AI remains difficult given its lack of robustness to novel situations, the inconsistency of its outputs, and the opacity of its reasoning process. Building trustworthy AI requires a paradigm shift from the current oversimplified practice of crafting accuracy-driven models to a human-centric design that can enhance human ability on manageable tasks, or enable humans and AIs to solve together complex tasks that are difficult for either alone. At the core of this problem lies the unrivaled human ability to generalize and abstract. While today’s AI can produce a response to any input, its ability to transfer knowledge to novel situations is still limited by oversimplification practices, as manifested in tasks that involve pragmatics, agent goals, and understanding of narrative structures. As there are currently no venues for cross-disciplinary research on reliable AI generalization, this discrepancy is problematic and calls for dedicated efforts to bring together, in one place, generalization experts from different fields within AI as well as from Cognitive Science. This Dagstuhl Seminar thus provided a unique opportunity to discuss the discrepancy between human and AI generalization mechanisms and to craft a vision for aligning the two streams in a compelling and promising way that combines the strengths of both. To ensure an effective seminar, we brought together cross-disciplinary perspectives across computer science and cognitive science. Our participants included experts in Interpretable Machine Learning, Neuro-Symbolic Reasoning, Explainable AI, Commonsense Reasoning, Case-Based Reasoning, Analogy, Cognitive Science, and Human-AI Teaming.
Specifically, the seminar participants focused on the following questions: How can cognitive mechanisms in people be used to inspire generalization in AI? What Machine Learning methods hold the promise to enable such reasoning mechanisms? What is the role of data and knowledge engineering for AI and human generalization? How can we design and model human-AI teams that can benefit from their complementary generalization capabilities? How can we evaluate generalization in humans and AI in a satisfactory manner?

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
  • Computing methodologies → Cognitive science
Keywords
  • Abstraction
  • Cognitive Science
  • Generalization
  • Human-AI Teaming
  • Interpretable Machine Learning
  • Neuro-Symbolic AI
