Search Results

Documents authored by Genovesi, Sergio


Practitioner Track
Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act (Practitioner Track)

Authors: Sergio Genovesi

Published in: OASIcs, Volume 126, Symposium on Scaling AI Assessments (SAIA 2024)


Abstract
To address the challenges of AI regulation and the EU AI Act’s requirements for financial organizations, we introduce an agile governance framework. This approach leverages existing organizational processes and governance structures, integrating AI-specific compliance measures without creating isolated processes and systems. The framework combines immediate measures for urgent AI compliance cases with the development of a broader AI governance structure. It starts with an assessment of requirements and risks, followed by a gap analysis; appropriate measures are then defined and prioritized for organization-wide execution. The implementation process includes continuous monitoring, adjustments, and stakeholder feedback, facilitating adaptability to evolving AI standards. This procedure not only ensures adherence to current regulations but also positions organizations to be well-equipped for prospective regulatory shifts and advancements in AI applications.

Cite as

Sergio Genovesi. Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 9:1-9:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{genovesi:OASIcs.SAIA.2024.9,
  author =	{Genovesi, Sergio},
  title =	{{Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{9:1--9:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.9},
  URN =		{urn:nbn:de:0030-drops-227496},
  doi =		{10.4230/OASIcs.SAIA.2024.9},
  annote =	{Keywords: AI Governance, EU AI Act, Gap Analysis, Risk Management, AI Risk Assessment}
}
Academic Track
Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track)

Authors: Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, and Jens Ziehn

Published in: OASIcs, Volume 126, Symposium on Scaling AI Assessments (SAIA 2024)


Abstract
Transparency is considered a key property with respect to the implementation of trustworthy artificial intelligence (AI). It is also addressed in various documents concerned with the standardization and regulation of AI systems. However, this body of literature lacks a standardized, widely accepted definition of transparency, which would be crucial for the implementation of upcoming legislation for AI like the AI Act of the European Union (EU). The main objective of this paper is to systematically analyze similarities and differences in the definitions and requirements for AI transparency. For this purpose, we define main criteria reflecting important dimensions of transparency. According to these criteria, we analyzed a set of relevant documents in AI standardization and regulation, and compared the outcomes. Almost all documents included requirements for transparency, including explainability as an associated concept. However, the details of the requirements differed considerably, e.g., regarding the pieces of information to be provided, target audiences, or use cases with respect to the development of AI systems. Additionally, the definitions and requirements often remain vague. In summary, we demonstrate that there is a substantial need for clarification and standardization regarding a consistent implementation of AI transparency. The method presented in our paper can serve as a basis for future steps in the standardization of transparency requirements, in particular with respect to upcoming regulations like the European AI Act.

Cite as

Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf, and Jens Ziehn. Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act (Academic Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{genovesi_et_al:OASIcs.SAIA.2024.10,
  author =	{Genovesi, Sergio and Haimerl, Martin and Merget, Iris and Prange, Samantha Morgaine and Obert, Otto and Wolf, Susanna and Ziehn, Jens},
  title =	{{Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act}},
  booktitle =	{Symposium on Scaling AI Assessments (SAIA 2024)},
  pages =	{10:1--10:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-357-7},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{126},
  editor =	{G\"{o}rge, Rebekka and Haedecke, Elena and Poretschkin, Maximilian and Schmitz, Anna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SAIA.2024.10},
  URN =		{urn:nbn:de:0030-drops-227509},
  doi =		{10.4230/OASIcs.SAIA.2024.10},
  annote =	{Keywords: AI, transparency, regulation}
}