Transparency of AI Systems (Practitioner Track)

Authors: Oliver Müller, Veronika Lazar, and Matthias Heck

File

OASIcs.SAIA.2024.11.pdf
  • Filesize: 377 kB
  • 7 pages

Document Identifiers

  • DOI: 10.4230/OASIcs.SAIA.2024.11

Author Details

Oliver Müller
  • Federal Office for Information Security (BSI), Saarbrücken, Germany
Veronika Lazar
  • Federal Office for Information Security (BSI), Saarbrücken, Germany
Matthias Heck
  • Federal Office for Information Security (BSI), Saarbrücken, Germany

Acknowledgements

We would like to thank our colleagues from the Central Office for Information Technology in the Security Sector (ZITiS) as well as from the Federal Office for Information Security (BSI) for their critical comments and for proofreading the full version of the paper.

Cite As

Oliver Müller, Veronika Lazar, and Matthias Heck. Transparency of AI Systems (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 11:1-11:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025). https://doi.org/10.4230/OASIcs.SAIA.2024.11

Abstract

Artificial Intelligence (AI) has established itself as a tool for both private and professional use and is now omnipresent. The number of available AI systems is constantly increasing, and the underlying technologies are evolving rapidly. On an abstract level, most of these systems operate as black boxes: only the inputs to and the outputs of the system are visible from the outside. Moreover, system outputs often lack explainability, which makes them difficult to verify without expert knowledge. The increasing complexity of AI systems, together with poor or missing information about them, makes it difficult to assess a system at a glance or to judge its trustworthiness. The goal is to empower stakeholders to assess the suitability of an AI system according to their needs and aims. Defining the term transparency in the context of AI systems is a first step in this direction. Transparency starts with the disclosure and provision of information and is embedded in the broad field of trustworthy AI systems. Within the scope of this paper, the Federal Office for Information Security (BSI) defines transparency of AI systems for different stakeholders. In order to keep pace with technical progress and to avoid continual revisions and adaptations of the definition to the current state of technology, this paper presents a technology-neutral and future-proof definition of transparency. Furthermore, the presented definition follows a holistic approach and thus also takes into account information about the ecosystem of an AI system. In this paper, we discuss our approach and procedure as well as the opportunities and risks of transparent AI systems. The full version of the paper includes the connection to the transparency requirements in the EU AI Act of the European Parliament and the Council.

Subject Classification

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
Keywords
  • transparency
  • artificial intelligence
  • black box
  • information
  • stakeholder
  • AI Act
