EAM Diagrams - A Framework to Systematically Describe AI Systems for Effective AI Risk Assessment (Academic Track)

Authors: Ronald Schnitzer, Andreas Hapfelmeier, Sonja Zillner




File

OASIcs.SAIA.2024.3.pdf
  • Filesize: 1.76 MB
  • 16 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.SAIA.2024.3

Author Details

Ronald Schnitzer
  • Technical University of Munich, Germany
  • Siemens AG, Munich, Germany
Andreas Hapfelmeier
  • Siemens AG, Munich, Germany
Sonja Zillner
  • Technical University of Munich, Germany
  • Siemens AG, Munich, Germany

Cite As

Ronald Schnitzer, Andreas Hapfelmeier, and Sonja Zillner. EAM Diagrams - A Framework to Systematically Describe AI Systems for Effective AI Risk Assessment (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025). https://doi.org/10.4230/OASIcs.SAIA.2024.3

Abstract

Artificial Intelligence (AI) is a transformative technology that offers new opportunities across various applications. However, the capabilities of AI systems introduce new risks, which require established risk assessment procedures to be adapted. A prerequisite for any effective risk assessment is a systematic description of the system under consideration, including its inner workings and application environment. Existing system description methodologies are only partially applicable to complex AI systems, as they either address only parts of the AI system, such as datasets or models, or do not consider AI-specific characteristics at all. In this paper, we present a novel framework called EAM Diagrams for the systematic description of AI systems, which gathers, along the AI life cycle, all information relevant to supporting a comprehensive risk assessment. The framework introduces diagrams on three levels, covering the AI system's environment, its functional inner workings, and the learning process of integrated Machine Learning (ML) models.
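
The three diagram levels named in the abstract suggest a natural layered data model. As a rough, non-authoritative illustration, the Python sketch below shows one way such a three-level description could be captured in code; all class and field names are assumptions chosen for this example and are not taken from the paper, which defines the actual diagram notation.

    # Illustrative sketch only: a minimal data model mirroring the three
    # EAM diagram levels described in the abstract. All names below are
    # assumptions for illustration; the paper defines the real notation.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class EnvironmentDiagram:
        # Level 1: the AI system's application environment.
        stakeholders: List[str] = field(default_factory=list)
        operating_conditions: List[str] = field(default_factory=list)

    @dataclass
    class FunctionalDiagram:
        # Level 2: the functional inner workings of the AI system.
        components: List[str] = field(default_factory=list)
        data_flows: List[Tuple[str, str]] = field(default_factory=list)  # (source, target)

    @dataclass
    class LearningDiagram:
        # Level 3: the learning process of an integrated ML model.
        datasets: List[str] = field(default_factory=list)
        training_steps: List[str] = field(default_factory=list)

    @dataclass
    class EAMDescription:
        # Bundles all three levels into one system description that a
        # risk assessment can iterate over.
        environment: EnvironmentDiagram
        functional: FunctionalDiagram
        learning: LearningDiagram

        def assessment_items(self) -> List[str]:
            # Flatten every recorded element into a single checklist,
            # e.g. as the starting point of a risk identification step.
            return (self.environment.stakeholders
                    + self.environment.operating_conditions
                    + self.functional.components
                    + self.learning.datasets
                    + self.learning.training_steps)

Under these assumptions, an assessor could instantiate EAMDescription for a concrete system and walk through assessment_items() to check that each recorded element has an associated risk analysis.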

Subject Classification

ACM Subject Classification
  • Software and its engineering → System description languages
Keywords
  • AI system description
  • AI risk assessment
  • AI auditability

