How Can We Make Trustworthy AI? (Invited Talk)

Author: Mateja Jamnik




Author Details

Mateja Jamnik
  • Department of Computer Science and Technology, University of Cambridge, UK

Cite As

Mateja Jamnik. How Can We Make Trustworthy AI? (Invited Talk). In 8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 260, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/LIPIcs.FSCD.2023.2

Abstract

Not too long ago most headlines talked about our fear of AI. Today, AI is ubiquitous, and the conversation has moved on from whether we should use AI to how we can trust the AI systems that we use in our daily lives. In this talk I look at some key technical ingredients that help us build confidence and trust in using intelligent technology. I argue that intuitiveness, interaction, explainability and inclusion of human domain knowledge are essential in building this trust. I present some of the techniques and methods we are building for making AI systems that think and interact with humans in more intuitive and personalised ways, enabling humans to better understand the solutions produced by machines, and enabling machines to incorporate human domain knowledge in their reasoning and learning processes.
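
As a purely illustrative sketch (not a description of the techniques presented in the talk), one simple way that human domain knowledge can be folded into a learning process is as a soft constraint added to the training loss. The toy example below assumes a linear model, a hypothetical expert-supplied monotonicity constraint, and a penalty weight lam; all names and choices are invented for illustration.

    import numpy as np

    # Toy illustration: human domain knowledge ("the output should be
    # non-decreasing in the input x") is encoded as a soft penalty that is
    # optimised alongside the usual data-fitting loss.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2 * x + 0.1 * rng.normal(size=x.shape)   # noisy, monotone data

    w, b = 0.0, 0.0                               # linear model: y_hat = w * x + b
    lam = 1.0                                     # weight of the knowledge penalty
    lr = 0.1

    for _ in range(500):
        y_hat = w * x + b
        # Data loss: mean squared error.
        grad_w = np.mean(2 * (y_hat - y) * x)
        grad_b = np.mean(2 * (y_hat - y))
        # Knowledge loss: lam * max(0, -w)^2 penalises a negative slope;
        # its gradient with respect to w is -2 * lam * max(0, -w).
        grad_w += -2 * lam * max(0.0, -w)
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"learned slope {w:.2f} (domain knowledge says it should be >= 0)")

The same pattern generalises: any expert rule that can be expressed as a differentiable penalty can steer what the model learns without replacing the data-driven objective.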

Subject Classification

ACM Subject Classification
  • Computing methodologies → Knowledge representation and reasoning
  • Computing methodologies → Machine learning
Keywords
  • AI
  • human-centric computing
  • knowledge representation
  • reasoning
  • machine learning
