OASIcs, Volume 126

Symposium on Scaling AI Assessments (SAIA 2024)



Event

SAIA 2024, September 30 to October 1, 2024, Cologne, Germany

Editors

Rebekka Görge
  • Fraunhofer IAIS, Sankt Augustin, Germany
Elena Haedecke
  • Fraunhofer IAIS, Sankt Augustin, Germany
  • University of Bonn, Bonn, Germany
Maximilian Poretschkin
  • Fraunhofer IAIS, Sankt Augustin, Germany
  • University of Bonn, Bonn, Germany
Anna Schmitz
  • Fraunhofer IAIS, Sankt Augustin, Germany

Publication Details

  • Published: 2025-01-27
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-357-7
  • DBLP: db/conf/saia/saia2024

Complete Volume
OASIcs, Volume 126, SAIA 2024, Complete Volume

Authors: Rebekka Görge, Elena Haedecke, Maximilian Poretschkin, and Anna Schmitz


Abstract
OASIcs, Volume 126, SAIA 2024, Complete Volume

Cite as

Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 1-174, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Proceedings{gorge_et_al:OASIcs.SAIA.2024,
  title =	{{OASIcs, Volume 126, SAIA 2024, Complete Volume}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{1--174},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024},
  URN =		{urn:nbn:de:0030-drops-228081},
  doi =		{10.4230/OASIcs.SAIA.2024},
  annote =	{Keywords: OASIcs, Volume 126, SAIA 2024, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Rebekka Görge, Elena Haedecke, Maximilian Poretschkin, and Anna Schmitz


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gorge_et_al:OASIcs.SAIA.2024.0,
  author =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{0:i--0:xii},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.0},
  URN =		{urn:nbn:de:0030-drops-228072},
  doi =		{10.4230/OASIcs.SAIA.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Academic Track
On Assessing ML Model Robustness: A Methodological Framework (Academic Track)

Authors: Afef Awadid and Boris Robert


Abstract
Due to their uncertainty and vulnerability to adversarial attacks, machine learning (ML) models can lead to severe consequences, including the loss of human life, when embedded in safety-critical systems such as autonomous vehicles. Therefore, it is crucial to assess the empirical robustness of such models before integrating them into these systems. ML model robustness refers to the ability of an ML model to be insensitive to input perturbations and maintain its performance. Against this background, the Confiance.ai research program proposes a methodological framework for assessing the empirical robustness of ML models. The framework encompasses methodological processes (guidelines) captured in Capella models, along with a set of supporting tools. This paper aims to provide an overview of this framework and its application in an industrial setting.

Cite as

Afef Awadid and Boris Robert. On Assessing ML Model Robustness: A Methodological Framework (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 1:1-1:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{awadid_et_al:OASIcs.SAIA.2024.1,
  author =	{Awadid, Afef and Robert, Boris},
  title =	{{On Assessing ML Model Robustness: A Methodological Framework}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{1:1--1:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.1},
  URN =		{urn:nbn:de:0030-drops-227410},
  doi =		{10.4230/OASIcs.SAIA.2024.1},
  annote =	{Keywords: ML model robustness, assessment, framework, methodological processes, tools}
}
Practitioner Track
Trustworthy Generative AI for Financial Services (Practitioner Track)

Authors: Marc-André Zöller, Anastasiia Iurshina, and Ines Röder


Abstract
This work introduces GFT EnterpriseGPT, a regulatory-compliant, trustworthy generative AI (GenAI) platform tailored for the financial services sector. We discuss the unique challenges of applying GenAI in highly regulated environments: in the financial sector, data privacy, ethical considerations, and regulatory compliance are paramount. Our solution addresses these challenges through multi-level safeguards, including robust guardrails, privacy-preserving techniques, and grounding mechanisms. Robust guardrails prevent unsafe inputs and outputs, privacy-preserving techniques reduce the need for data transmission to third-party providers, and grounding mechanisms ensure the accuracy and reliability of artificial intelligence (AI) generated content. By incorporating these measures, we propose a path forward for safely harnessing the transformative potential of GenAI in finance, ensuring reliability, transparency, and adherence to ethical and regulatory standards. We demonstrate the practical application of GFT EnterpriseGPT within a large-scale financial institution, where it successfully improves operational efficiency and compliance.

Cite as

Marc-André Zöller, Anastasiia Iurshina, and Ines Röder. Trustworthy Generative AI for Financial Services (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 2:1-2:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{zoller_et_al:OASIcs.SAIA.2024.2,
  author =	{Z\"{o}ller, Marc-Andr\'{e} and Iurshina, Anastasiia and R\"{o}der, Ines},
  title =	{{Trustworthy Generative AI for Financial Services}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{2:1--2:5},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.2},
  URN =		{urn:nbn:de:0030-drops-227428},
  doi =		{10.4230/OASIcs.SAIA.2024.2},
  annote =	{Keywords: Generative AI, GenAI, Trustworthy AI, Finance, Guardrails, Grounding}
}
Academic Track
EAM Diagrams - A Framework to Systematically Describe AI Systems for Effective AI Risk Assessment (Academic Track)

Authors: Ronald Schnitzer, Andreas Hapfelmeier, and Sonja Zillner


Abstract
Artificial Intelligence (AI) is a transformative technology that offers new opportunities across various applications. However, the capabilities of AI systems introduce new risks, which require the adaptation of established risk assessment procedures. A prerequisite for any effective risk assessment is a systematic description of the system under consideration, including its inner workings and application environment. Existing system description methodologies are only partially applicable to complex AI systems, as they either address only parts of the AI system, such as datasets or models, or do not consider AI-specific characteristics at all. In this paper, we present a novel framework called EAM Diagrams for the systematic description of AI systems, gathering all relevant information along the AI life cycle required to support a comprehensive risk assessment. The framework introduces diagrams on three levels, covering the AI system’s environment, functional inner workings, and the learning process of integrated Machine Learning (ML) models.

Cite as

Ronald Schnitzer, Andreas Hapfelmeier, and Sonja Zillner. EAM Diagrams - A Framework to Systematically Describe AI Systems for Effective AI Risk Assessment (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{schnitzer_et_al:OASIcs.SAIA.2024.3,
  author =	{Schnitzer, Ronald and Hapfelmeier, Andreas and Zillner, Sonja},
  title =	{{EAM Diagrams - A Framework to Systematically Describe AI Systems for Effective AI Risk Assessment}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{3:1--3:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.3},
  URN =		{urn:nbn:de:0030-drops-227432},
  doi =		{10.4230/OASIcs.SAIA.2024.3},
  annote =	{Keywords: AI system description, AI risk assessment, AI auditability}
}
Practitioner Track
Scaling of End-To-End Governance Risk Assessments for AI Systems (Practitioner Track)

Authors: Daniel Weimer, Andreas Gensch, and Kilian Koller


Abstract
Artificial Intelligence (AI) systems are embedded in a multifaceted environment characterized by intricate technical, legal, and organizational frameworks. To attain a comprehensive understanding of all AI-related risks, it is essential to evaluate both model-specific risks and those associated with the organizational and governance setups. We categorize these as "bottom-up risks" and "top-down risks," respectively. In this paper, we focus on the expansion and enhancement of a testing and auditing technology stack to identify and manage governance-related risks ("top-down"). These risks emerge from various dimensions, including internal development and decision-making processes, leadership structures, security setups, documentation practices, and more. For auditing governance-related risks, we implement a traditional risk management framework and map it to the specifics of AI systems. Our end-to-end (from identification to monitoring) risk management kernel follows five implementation steps: Identify, Collect, Assess, Comply, and Monitor. We demonstrate that scaling such a risk auditing tool rests on several fundamental aspects. These include, for instance, a role-based approach covering the different roles involved in the development of complex AI systems. Ensuring compliance and secure record-keeping through audit-proof capabilities is also paramount; this ensures that the auditing technology can withstand scrutiny and maintain the integrity of records over time. Another critical aspect is the integrability of the auditing tool within existing risk management and governance infrastructures. This integration is essential to reduce the barriers for companies to comply with current regulatory requirements, such as the EU AI Act [European Parliament and the Council of the EU, 2024], and established standards like ISO 42001:2023. Ultimately, we demonstrate that this approach provides a robust technology stack for ensuring that AI systems are developed, utilized, and supervised in a manner that is both compliant with regulatory standards and aligned with best practices in risk management and governance.

Cite as

Daniel Weimer, Andreas Gensch, and Kilian Koller. Scaling of End-To-End Governance Risk Assessments for AI Systems (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 4:1-4:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{weimer_et_al:OASIcs.SAIA.2024.4,
  author =	{Weimer, Daniel and Gensch, Andreas and Koller, Kilian},
  title =	{{Scaling of End-To-End Governance Risk Assessments for AI Systems}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{4:1--4:5},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.4},
  URN =		{urn:nbn:de:0030-drops-227443},
  doi =		{10.4230/OASIcs.SAIA.2024.4},
  annote =	{Keywords: AI Governance, Risk Management, AI Assessment}
}
Practitioner Track
Risk Analysis Technique for the Evaluation of AI Technologies with Respect to Directly and Indirectly Affected Entities (Practitioner Track)

Authors: Joachim Iden, Felix Zwarg, and Bouthaina Abdou


Abstract
AI technologies are often described as being transformative to society. In fact, their impact is multifaceted, with both local and global effects which may be of a direct or indirect nature. Effects can stem from both the intended use of the technology and its unintentional side effects. Potentially affected entities include natural or juridical persons, groups of persons, as well as society as a whole, the economy and the natural environment. There are a number of different roles which characterise the relationship with a specific AI technology, including manufacturer, provider, voluntary user, involuntarily affected person, government, regulatory authority, and certification body. For each role, specific properties must be identified and evaluated for relevance, including ethics-related properties like privacy, fairness, human rights and human autonomy as well as engineering-related properties such as performance, reliability, safety and security. As for any other technology, there are identifiable lifecycle phases of the deployment of an AI technology, including specification, design, implementation, operation, maintenance and decommissioning. In this paper, we argue that all of these phases must be considered systematically in order to reveal both direct and indirect costs and effects and to allow an objective judgment of a specific AI technology. In the past, costs caused by one party but incurred by another (so-called "externalities") have often been overlooked or deliberately obscured. Our approach is intended to help remedy this. We therefore discuss possible impact mechanisms represented by keywords such as resources, materials, energy, data, communication, transportation, employment and social interaction in order to identify possible causal paths. For the purpose of the analysis, we distinguish degrees of stakeholder involvement in order to support the identification of those causal paths which are not immediately obvious.

Cite as

Joachim Iden, Felix Zwarg, and Bouthaina Abdou. Risk Analysis Technique for the Evaluation of AI Technologies with Respect to Directly and Indirectly Affected Entities (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 5:1-5:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{iden_et_al:OASIcs.SAIA.2024.5,
  author =	{Iden, Joachim and Zwarg, Felix and Abdou, Bouthaina},
  title =	{{Risk Analysis Technique for the Evaluation of AI Technologies with Respect to Directly and Indirectly Affected Entities}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{5:1--5:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.5},
  URN =		{urn:nbn:de:0030-drops-227456},
  doi =		{10.4230/OASIcs.SAIA.2024.5},
  annote =	{Keywords: AI, Risk Analysis, Risk Management, AI assessment}
}
Practitioner Track
SafeAI-Kit: A Software Toolbox to Evaluate AI Systems with a Focus on Uncertainty Quantification (Practitioner Track)

Authors: Dominik Eisl, Bastian Bernhardt, Lukas Höhndorf, and Rafal Kulaga


Abstract
In the course of the practitioner track, the IABG toolbox safeAI-kit is presented with a focus on uncertainty quantification in machine learning. The safeAI-kit consists of five sub-modules that provide analyses for performance, robustness, dataset, explainability, and uncertainty. The development of these sub-modules takes ongoing standardization activities into account.

Cite as

Dominik Eisl, Bastian Bernhardt, Lukas Höhndorf, and Rafal Kulaga. SafeAI-Kit: A Software Toolbox to Evaluate AI Systems with a Focus on Uncertainty Quantification (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 6:1-6:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{eisl_et_al:OASIcs.SAIA.2024.6,
  author =	{Eisl, Dominik and Bernhardt, Bastian and H\"{o}hndorf, Lukas and Kulaga, Rafal},
  title =	{{SafeAI-Kit: A Software Toolbox to Evaluate AI Systems with a Focus on Uncertainty Quantification}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{6:1--6:3},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.6},
  URN =		{urn:nbn:de:0030-drops-227466},
  doi =		{10.4230/OASIcs.SAIA.2024.6},
  annote =	{Keywords: safeAI-kit, Evaluation of AI Systems, Uncertainty Quantification}
}
Academic Track
Towards Trusted AI: A Blueprint for Ethics Assessment in Practice (Academic Track)

Authors: Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña, and Iris Merget


Abstract
The development of AI technologies leaves room for unforeseen ethical challenges. Issues such as bias, lack of transparency, and data privacy must be addressed during the design, development, and deployment stages throughout the lifecycle of AI systems to mitigate their impact on users. Consequently, ensuring that such systems are responsibly built has become a priority for researchers and developers from both the public and private sectors. As a proposed solution, this paper presents a blueprint for AI ethics assessment. The blueprint provides an adaptable approach for AI use cases that is agnostic to ethics guidelines, regulatory environments, business models, and industry sectors. It offers an outcomes library of key performance indicators (KPIs) which are guided by a mapping of ethics framework measures to the processes and phases defined by the blueprint. The main objectives of the blueprint are to provide an operationalizable process for the responsible development of ethical AI systems and to enhance the public trust needed for broad adoption of trusted AI solutions. In an initial pilot, the blueprint for AI ethics assessment is applied to a use case of generative AI in education.

Cite as

Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña, and Iris Merget. Towards Trusted AI: A Blueprint for Ethics Assessment in Practice (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 7:1-7:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{wirth_et_al:OASIcs.SAIA.2024.7,
  author =	{Wirth, Christoph Tobias and Maftei, Mihai and Mart{\'\i}n-Pe\~{n}a, Rosa Esther and Merget, Iris},
  title =	{{Towards Trusted AI: A Blueprint for Ethics Assessment in Practice}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{7:1--7:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.7},
  URN =		{urn:nbn:de:0030-drops-227478},
  doi =		{10.4230/OASIcs.SAIA.2024.7},
  annote =	{Keywords: Trusted AI, Trustworthy AI, AI Ethics Assessment Framework, AI Quality, AI Ethics, AI Ethics Assessment, AI Lifecycle, Responsible AI, Ethics-By-Design, AI Risk Management, Ethics Impact Assessment, AI Ethics KPIs, Human-Centric AI, Applied Ethics}
}
Practitioner Track
AI Readiness of Standards: Bridging Traditional Norms with Modern Technologies (Practitioner Track)

Authors: Adrian Seeliger


Abstract
In an era where artificial intelligence (AI) is spreading throughout most industries, it is imperative to understand how existing regulatory frameworks, particularly technical standards, can adapt to accommodate AI technologies. This paper presents findings of an interdisciplinary research & development project aimed at evaluating the AI readiness of the German national body of standards, encompassing approximately 30,000 DIN, DIN EN, and DIN EN ISO documents. Utilizing a hybrid approach that combines human expertise with machine-assisted processes, we sought to determine whether these standards meet the conditions required for secure and purpose-specific AI implementation. Our research focused on defining AI readiness, operationalizing this concept, and evaluating the extent to which existing standards meet these criteria. AI readiness refers to whether a standard complies with the conditions necessary for ensuring that an AI system operates securely and as intended. To operationalize AI readiness, we developed explicit criteria encompassing AI-specific requirements and the contextual application of these standards. A dual approach involving thorough human analyses and the use of software automation was employed. Human experts annotated standardization documents to create high-quality training data, while machine learning methodologies were utilized to develop AI models capable of classifying the AI readiness of these documents. Three different software tools were developed to provide a proof of concept for a more scalable and efficient review of the 30,000 standards. Despite certain technical and organizational challenges, the integration of both human insight and machine-led processes provided valuable and actionable results and insights for further development. Key findings address the exact choice of words and graphical representation in standardization documents, normative references, the categorization of standardization documents, as well as suggestions for concrete document adaptations. The results underscore the importance of an interdisciplinary approach, combining domain-specific knowledge and advanced AI capabilities, to future-proof the intricate regulatory frameworks that underpin our industries and society.

Cite as

Adrian Seeliger. AI Readiness of Standards: Bridging Traditional Norms with Modern Technologies (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 8:1-8:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{seeliger:OASIcs.SAIA.2024.8,
  author =	{Seeliger, Adrian},
  title =	{{AI Readiness of Standards: Bridging Traditional Norms with Modern Technologies}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{8:1--8:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.8},
  URN =		{urn:nbn:de:0030-drops-227486},
  doi =		{10.4230/OASIcs.SAIA.2024.8},
  annote =	{Keywords: Standardization, Norms and Standards, AI Readiness, Artificial Intelligence, Knowledge Automation}
}
Practitioner Track
Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act (Practitioner Track)

Authors: Sergio Genovesi


Abstract
To address the challenges of AI regulation and the EU AI Act’s requirements for financial organizations, we introduce an agile governance framework. This approach leverages existing organizational processes and governance structures, integrating AI-specific compliance measures without creating isolated processes and systems. The framework combines immediate measures to address urgent AI compliance cases with the development of a broader AI governance. It starts with an assessment of requirements and risks, followed by a gap analysis; after that, appropriate measures are defined and prioritized for organization-wide execution. The implementation process includes continuous monitoring, adjustments, and stakeholder feedback, facilitating adaptability to evolving AI standards. This procedure not only guarantees adherence to current regulations but also positions organizations to be well-equipped for prospective regulatory shifts and advancements in AI applications.

Cite as

Sergio Genovesi. Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 9:1-9:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{genovesi:OASIcs.SAIA.2024.9,
  author =	{Genovesi, Sergio},
  title =	{{Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{9:1--9:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.9},
  URN =		{urn:nbn:de:0030-drops-227496},
  doi =		{10.4230/OASIcs.SAIA.2024.9},
  annote =	{Keywords: AI Governance, EU AI Act, Gap Analysis, Risk Management, AI Risk Assessment}
}
Academic Track
Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track)

Authors: Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, and Jens Ziehn


Abstract
Transparency is considered a key property with respect to the implementation of trustworthy artificial intelligence (AI). It is also addressed in various documents concerned with the standardization and regulation of AI systems. However, this body of literature lacks a standardized, widely-accepted definition of transparency, which would be crucial for the implementation of upcoming legislation for AI like the AI Act of the European Union (EU). The main objective of this paper is to systematically analyze similarities and differences in the definitions and requirements for AI transparency. For this purpose, we define main criteria reflecting important dimensions of transparency. According to these criteria, we analyzed a set of relevant documents in AI standardization and regulation, and compared the outcomes. Almost all documents included requirements for transparency, including explainability as an associated concept. However, the details of the requirements differed considerably, e.g., regarding pieces of information to be provided, target audiences, or use cases with respect to the development of AI systems. Additionally, the definitions and requirements often remain vague. In summary, we demonstrate that there is a substantial need for clarification and standardization regarding a consistent implementation of AI transparency. The method presented in our paper can serve as a basis for future steps in the standardization of transparency requirements, in particular with respect to upcoming regulations like the European AI Act.

Cite as

Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, and Jens Ziehn. Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{genovesi_et_al:OASIcs.SAIA.2024.10,
  author =	{Genovesi, Sergio and Haimerl, Martin and Merget, Iris and Prange, Samantha Morgaine and Obert, Otto and Wolf, Susanna and Ziehn, Jens},
  title =	{{Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{10:1--10:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.10},
  URN =		{urn:nbn:de:0030-drops-227509},
  doi =		{10.4230/OASIcs.SAIA.2024.10},
  annote =	{Keywords: AI, transparency, regulation}
}
Practitioner Track
Transparency of AI Systems (Practitioner Track)

Authors: Oliver Müller, Veronika Lazar, and Matthias Heck


Abstract
Artificial Intelligence (AI) has now established itself as a tool for both private and professional use and is omnipresent. The number of available AI systems is constantly increasing and the underlying technologies are evolving rapidly. On an abstract level, most of these systems operate in a black-box manner: only the inputs to and the outputs of the system are visible from outside. Moreover, system outputs often lack explainability, which makes them difficult to verify without expert knowledge. The increasing complexity of AI systems, together with poor or missing information about a system, makes it difficult to assess a system at a glance and to judge its trustworthiness. The goal is to empower stakeholders to assess the suitability of an AI system according to their needs and aims. The definition of the term transparency in the context of AI systems represents a first step in this direction. Transparency starts with the disclosure and provision of information and is embedded in the broad field of trustworthy AI systems. Within the scope of this paper, the Federal Office for Information Security (BSI) defines transparency of AI systems for different stakeholders. In order to keep pace with technical progress and to avoid continuous renewals and adaptations of the definition to the current state of technology, this paper presents a technology-neutral and future-proof definition of transparency. Furthermore, the presented definition follows a holistic approach and thus also takes into account information about the ecosystem of an AI system. In this paper, we discuss our approach and proceeding as well as the opportunities and risks of transparent AI systems. The full version of the paper includes the connection to the transparency requirements in the EU AI Act of the European Parliament and the Council.

Cite as

Oliver Müller, Veronika Lazar, and Matthias Heck. Transparency of AI Systems (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 11:1-11:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{muller_et_al:OASIcs.SAIA.2024.11,
  author =	{M\"{u}ller, Oliver and Lazar, Veronika and Heck, Matthias},
  title =	{{Transparency of AI Systems}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{11:1--11:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.11},
  URN =		{urn:nbn:de:0030-drops-227512},
  doi =		{10.4230/OASIcs.SAIA.2024.11},
  annote =	{Keywords: transparency, artificial intelligence, black box, information, stakeholder, AI Act}
}
Academic Track
A View on Vulnerabilites: The Security Challenges of XAI (Academic Track)

Authors: Elisabeth Pachl, Fabian Langer, Thora Markert, and Jeanette Miriam Lorenz


Abstract
Modern deep learning methods have long been considered black boxes due to their opaque decision-making processes. Explainable Artificial Intelligence (XAI), however, has turned the tables: it provides insight into how these models work, promoting transparency that is crucial for accountability. Yet, recent developments in adversarial machine learning have highlighted vulnerabilities in XAI methods, raising concerns about security, reliability and trustworthiness, particularly in sensitive areas like healthcare and autonomous systems. Awareness of the potential risks associated with XAI is needed as its adoption increases, driven in part by the need to enhance compliance with regulations. This survey provides a holistic perspective on the security and safety landscape surrounding XAI, categorizing research on adversarial attacks against XAI and the misuse of explainability to enhance attacks on AI systems, such as evasion and privacy breaches. Our contribution includes identifying current insecurities in XAI and outlining future research directions in adversarial XAI. This work serves as an accessible foundation and outlook for recognizing potential research gaps and defining future directions. It identifies data modalities, such as time-series or graph data, and XAI methods that have not been extensively investigated for vulnerabilities in current research.

Cite as

Elisabeth Pachl, Fabian Langer, Thora Markert, and Jeanette Miriam Lorenz. A View on Vulnerabilites: The Security Challenges of XAI (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 12:1-12:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{pachl_et_al:OASIcs.SAIA.2024.12,
  author =	{Pachl, Elisabeth and Langer, Fabian and Markert, Thora and Lorenz, Jeanette Miriam},
  title =	{{A View on Vulnerabilites: The Security Challenges of XAI}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{12:1--12:23},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.12},
  URN =		{urn:nbn:de:0030-drops-227523},
  doi =		{10.4230/OASIcs.SAIA.2024.12},
  annote =	{Keywords: Explainability, XAI, Transparency, Adversarial Machine Learning, Security, Vulnerabilities}
}
Practitioner Track
AI Certification: Empirical Investigations into Possible Cul-De-Sacs and Ways Forward (Practitioner Track)

Authors: Benjamin Fresz, Danilo Brajovic, and Marco F. Huber


Abstract
In this paper, previously conducted studies regarding the development and certification of safe Artificial Intelligence (AI) systems from the practitioner’s viewpoint are summarized. Overall, both studies point towards a common theme: AI certification will mainly rely on the analysis of the processes used to create AI systems. While additional techniques such as methods from the field of eXplainable AI (XAI) and formal verification seem to hold a lot of promise, they can assist in creating safe AI systems but do not provide comprehensive solutions to the existing problems regarding AI certification.

Cite as

Benjamin Fresz, Danilo Brajovic, and Marco F. Huber. AI Certification: Empirical Investigations into Possible Cul-De-Sacs and Ways Forward (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 13:1-13:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{fresz_et_al:OASIcs.SAIA.2024.13,
  author =	{Fresz, Benjamin and Brajovic, Danilo and Huber, Marco F.},
  title =	{{AI Certification: Empirical Investigations into Possible Cul-De-Sacs and Ways Forward}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{13:1--13:4},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.13},
  URN =		{urn:nbn:de:0030-drops-227533},
  doi =		{10.4230/OASIcs.SAIA.2024.13},
  annote =	{Keywords: AI certification, eXplainable AI (XAI), safe AI, trustworthy AI, AI documentation}
}
Practitioner Track
AI Certification: An Accreditation Perspective (Practitioner Track)

Authors: Susanne Kuch and Raoul Kirmes


Abstract
AI regulations worldwide set new requirements for AI systems, leading to thriving efforts to develop testing tools, metrics and procedures to prove their fulfillment. While such tools are still under research and development, this paper argues that the procedures to perform conformity assessment, especially certification, are largely in place. It provides an overview of how AI product certifications work based on international standards (ISO/IEC 17000 series) and what elements are missing from an accreditation perspective. The goal of this paper is to establish a common understanding of how conformity assessment in general and certification in particular work regarding AI systems.

Cite as

Susanne Kuch and Raoul Kirmes. AI Certification: An Accreditation Perspective (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 14:1-14:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{kuch_et_al:OASIcs.SAIA.2024.14,
  author =	{Kuch, Susanne and Kirmes, Raoul},
  title =	{{AI Certification: An Accreditation Perspective}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{14:1--14:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.14},
  URN =		{urn:nbn:de:0030-drops-227541},
  doi =		{10.4230/OASIcs.SAIA.2024.14},
  annote =	{Keywords: certification, conformity assessment, market entry, accreditation, artificial intelligence, standard}
}
Academic Track
AI Assessment in Practice: Implementing a Certification Scheme for AI Trustworthiness (Academic Track)

Authors: Carmen Frischknecht-Gruber, Philipp Denzel, Monika Reif, Yann Billeter, Stefan Brunner, Oliver Forster, Frank-Peter Schilling, Joanna Weng, and Ricardo Chavarriaga


Abstract
The trustworthiness of artificial intelligence systems is crucial for their widespread adoption and for avoiding negative impacts on society and the environment. This paper focuses on implementing a comprehensive certification scheme developed through a collaborative academic-industry project. The scheme provides practical guidelines for assessing and certifying the trustworthiness of AI-based systems. The implementation of the scheme leverages aspects from Machine Learning Operations and the requirements management tool Jira to ensure continuous compliance and efficient lifecycle management. The integration of various high-level frameworks, scientific methods, and metrics supports the systematic evaluation of key aspects of trustworthiness, such as reliability, transparency, safety and security, and human oversight. These methods and metrics were tested and assessed on real-world use cases to dependably verify means of compliance with regulatory requirements and evaluate criteria and detailed objectives for each of these key aspects. Thus, this certification framework bridges the gap between ethical guidelines and practical application, ensuring the safe and effective deployment of AI technologies.

Cite as

Carmen Frischknecht-Gruber, Philipp Denzel, Monika Reif, Yann Billeter, Stefan Brunner, Oliver Forster, Frank-Peter Schilling, Joanna Weng, and Ricardo Chavarriaga. AI Assessment in Practice: Implementing a Certification Scheme for AI Trustworthiness (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 15:1-15:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{frischknechtgruber_et_al:OASIcs.SAIA.2024.15,
  author =	{Frischknecht-Gruber, Carmen and Denzel, Philipp and Reif, Monika and Billeter, Yann and Brunner, Stefan and Forster, Oliver and Schilling, Frank-Peter and Weng, Joanna and Chavarriaga, Ricardo},
  title =	{{AI Assessment in Practice: Implementing a Certification Scheme for AI Trustworthiness}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{15:1--15:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.15},
  URN =		{urn:nbn:de:0030-drops-227554},
  doi =		{10.4230/OASIcs.SAIA.2024.15},
  annote =	{Keywords: AI Assessment, Certification Scheme, Artificial Intelligence, Trustworthiness of AI systems, AI Standards, AI Safety}
}
