Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track)

Authors: Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, Jens Ziehn




Author Details

Sergio Genovesi
  • SKAD AG, Frankfurt am Main, Germany
Martin Haimerl
  • Universität Furtwangen, Germany
Iris Merget
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern, Germany
Samantha Morgaine Prange
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern, Germany
Otto Obert
  • Main DigitalEthiker GmbH, Karlstadt am Main, Germany
Susanna Wolf
  • DATEV eG, Nürnberg, Germany
Jens Ziehn
  • Fraunhofer IOSB, Karlsruhe, Germany

Acknowledgements

We would like to thank our fellow members of the German Standardization Roadmap on Artificial Intelligence, which is supported by the Federal Ministry for Economic Affairs and Climate Action (BMWK) on the basis of a decision by the German Bundestag. Our thanks go in particular to the Foundations working group and its Ethics sub-working group, as well as to the organizing committee of DIN, the German Institute for Standardization, and DKE, the German Commission for Electrical, Electronic & Information Technologies of DIN and VDE, for providing the basis for the activities of this working group and in particular for this publication.

Cite As

Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, and Jens Ziehn. Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025) https://doi.org/10.4230/OASIcs.SAIA.2024.10

Abstract

Transparency is considered a key property for the implementation of trustworthy artificial intelligence (AI). It is also addressed in various documents concerned with the standardization and regulation of AI systems. However, this body of literature lacks a standardized, widely accepted definition of transparency, which would be crucial for implementing upcoming legislation for AI such as the AI Act of the European Union (EU). The main objective of this paper is to systematically analyze similarities and differences in the definitions of and requirements for AI transparency. For this purpose, we define a set of criteria reflecting important dimensions of transparency. According to these criteria, we analyzed a set of relevant documents in AI standardization and regulation and compared the outcomes. Almost all documents include requirements for transparency, including explainability as an associated concept. However, the details of the requirements differ considerably, e.g., regarding the information to be provided, the target audiences, or the use cases with respect to the development of AI systems. Additionally, the definitions and requirements often remain vague. In summary, we demonstrate that there is a substantial need for clarification and standardization to achieve a consistent implementation of AI transparency. The method presented in our paper can serve as a basis for future steps in the standardization of transparency requirements, in particular with respect to upcoming regulations like the European AI Act.

Subject Classification

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
Keywords
  • AI
  • transparency
  • regulation
