AI Certification: Empirical Investigations into Possible Cul-De-Sacs and Ways Forward (Practitioner Track)

Authors: Benjamin Fresz, Danilo Brajovic, Marco F. Huber



File

OASIcs.SAIA.2024.13.pdf
  • Filesize: 383 kB
  • 4 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.SAIA.2024.13

Author Details

Benjamin Fresz
  • Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany
  • Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany
Danilo Brajovic
  • Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany
  • Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany
Marco F. Huber
  • Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany
  • Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany

Acknowledgements

Parts of this paper were refined with the help of a company-specific LLM (FhGenie 4o).

Cite As

Benjamin Fresz, Danilo Brajovic, and Marco F. Huber. AI Certification: Empirical Investigations into Possible Cul-De-Sacs and Ways Forward (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 13:1-13:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025). https://doi.org/10.4230/OASIcs.SAIA.2024.13

Abstract

In this paper, previously conducted studies regarding the development and certification of safe Artificial Intelligence (AI) systems are summarized from the practitioner’s viewpoint. Overall, both studies point towards a common theme: AI certification will mainly rely on analyzing the processes used to create AI systems. Additional techniques, such as methods from the field of eXplainable AI (XAI) and formal verification, seem promising and can assist in creating safe AI systems, but they do not provide comprehensive solutions to the existing problems of AI certification.
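
As one concrete example of the XAI methods mentioned above, the following minimal Python sketch computes permutation feature importance for a trained classifier. The library (scikit-learn), dataset, and model are assumptions chosen only for illustration; such an analysis can support an assessment of model behaviour, but, in line with the studies' findings, it does not by itself certify a system.

    # Illustrative sketch only (assumes scikit-learn is installed); this is
    # not the paper's method, just one common XAI-style analysis of the sort
    # that can assist -- without fully solving -- AI certification.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a simple classifier on a public dataset (chosen for illustration).
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Estimate each feature's contribution to held-out accuracy by shuffling
    # its values and measuring the resulting drop in score.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, mean, std in zip(X.columns, result.importances_mean,
                               result.importances_std):
        print(f"{name}: {mean:.3f} +/- {std:.3f}")

Such an importance ranking gives auditors inspectable evidence about model behaviour; the certification argument itself, as both studies suggest, still rests on the development process.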

Subject Classification

ACM Subject Classification
  • Social and professional topics → Testing, certification and licensing
  • Computing methodologies → Machine learning
  • General and reference → Empirical studies
Keywords
  • AI certification
  • eXplainable AI (XAI)
  • safe AI
  • trustworthy AI
  • AI documentation


References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance), 2024. URL: http://data.europa.eu/eli/reg/2024/1689/oj.
  2. Danilo Brajovic, Niclas Renner, Vincent Philipp Goebels, Philipp Wagner, Benjamin Fresz, Martin Biller, Mara Klaeb, Janika Kutz, Jens Neuhuettler, and Marco F. Huber. Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development, 2023. https://arxiv.org/abs/2307.11525, URL: https://doi.org/10.48550/arXiv.2307.11525.
  3. Benjamin Fresz, Vincent Philipp Göbels, Safa Omri, Danilo Brajovic, Andreas Aichele, Janika Kutz, Jens Neuhüttler, and Marco F. Huber. The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis, 2024. https://arxiv.org/abs/2408.02379, URL: https://doi.org/10.48550/arXiv.2408.02379.
  4. Yueqi Li and Sanjay Goel. Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability. Information Systems Frontiers, 2024. URL: https://doi.org/10.1007/s10796-024-10508-8.
  5. Jakob Mökander. Auditing of AI: Legal, Ethical and Technical Approaches. Digital Society, 2(3), 2023. URL: https://doi.org/10.1007/s44206-023-00074-y.
  6. Maximilian Poretschkin, Anna Schmitz, Maram Akila, Linara Adilova, Daniel Becker, Armin B. Cremers, Dirk Hecker, Sebastian Houben, Michael Mock, Julia Rosenzweig, Joachim Sicking, Elena Schulz, Angelika Voss, and Stefan Wrobel. Guideline for Trustworthy Artificial Intelligence - AI Assessment Catalog, 2023. https://arxiv.org/abs/2307.03681, URL: https://doi.org/10.48550/arXiv.2307.03681.
7. Joyce Zhou and Thorsten Joachims. How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT '23, pages 12-21, New York, NY, USA, 2023. Association for Computing Machinery. URL: https://doi.org/10.1145/3593013.3593972.