Scaling of End-To-End Governance Risk Assessments for AI Systems (Practitioner Track)

Authors: Daniel Weimer, Andreas Gensch, Kilian Koller



File

OASIcs.SAIA.2024.4.pdf
  • Filesize: 0.52 MB
  • 5 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.SAIA.2024.4

Author Details

Daniel Weimer
  • ceel.ai, Munich, Germany
Andreas Gensch
  • ceel.ai, Munich, Germany
Kilian Koller
  • ceel.ai, Munich, Germany

Cite As

Daniel Weimer, Andreas Gensch, and Kilian Koller. Scaling of End-To-End Governance Risk Assessments for AI Systems (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 4:1-4:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025). https://doi.org/10.4230/OASIcs.SAIA.2024.4

Abstract

Artificial Intelligence (AI) systems are embedded in a multifaceted environment characterized by intricate technical, legal, and organizational frameworks. To attain a comprehensive understanding of all AI-related risks, it is essential to evaluate both model-specific risks and those associated with the organizational and governance setup. We categorize these as "bottom-up risks" and "top-down risks," respectively. In this paper, we focus on the expansion and enhancement of a testing and auditing technology stack to identify and manage governance-related ("top-down") risks. These risks emerge from various dimensions, including internal development and decision-making processes, leadership structures, security setups, documentation practices, and more. For auditing governance-related risks, we implement a traditional risk management framework and map it to the specifics of AI systems. Our end-to-end (from identification to monitoring) risk management kernel follows these implementation steps:
- Identify 
- Collect 
- Assess 
- Comply 
- Monitor

We demonstrate that scaling such a risk-auditing tool rests on a few fundamental capabilities; illustrative sketches of two of them follow this abstract. The first is a role-based approach that covers the different roles involved in the development of complex AI systems. Ensuring compliance and secure record-keeping through audit-proof capabilities is also paramount: the auditing technology must withstand scrutiny and maintain the integrity of its records over time. Another critical aspect is the integrability of the auditing tool with existing risk management and governance infrastructures. This integration is essential to lower the barrier for companies to comply with current regulatory requirements, such as the EU AI Act [European Parliament and the Council of the EU, 2024], and with established standards such as ISO/IEC 42001:2023. Ultimately, we demonstrate that this approach provides a robust technology stack for ensuring that AI systems are developed, used, and supervised in a manner that is both compliant with regulatory standards and aligned with best practices in risk management and governance.
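The abstract itself contains no code, so the following Python sketches are illustrative only. The first shows one way the five-step kernel could be modeled, assuming hypothetical names (GovernanceRisk, RiskKernel) and a simple integer severity score with a compliance threshold; the paper does not specify the tool's actual data model.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class GovernanceRisk:
    """One top-down risk item, e.g. a gap in documentation practices."""
    risk_id: str
    description: str
    owner_role: str  # role-based approach: every risk has a responsible role
    evidence: list[str] = field(default_factory=list)
    severity: Optional[int] = None    # filled in by the assess step
    compliant: Optional[bool] = None  # filled in by the comply step


class RiskKernel:
    """End-to-end kernel: identify -> collect -> assess -> comply -> monitor."""

    def __init__(self) -> None:
        self.register: dict[str, GovernanceRisk] = {}

    def identify(self, risk: GovernanceRisk) -> None:
        """Register a newly identified governance risk."""
        self.register[risk.risk_id] = risk

    def collect(self, risk_id: str, evidence: str) -> None:
        """Attach a piece of evidence (document, interview note, log excerpt)."""
        self.register[risk_id].evidence.append(evidence)

    def assess(self, risk_id: str, scorer: Callable[[GovernanceRisk], int]) -> None:
        """Score the risk with a pluggable assessment function."""
        self.register[risk_id].severity = scorer(self.register[risk_id])

    def comply(self, risk_id: str, threshold: int = 3) -> None:
        """Mark the risk compliant if its severity is at or below the threshold."""
        risk = self.register[risk_id]
        risk.compliant = risk.severity is not None and risk.severity <= threshold

    def monitor(self) -> list[str]:
        """Return ids of unassessed or non-compliant risks for re-review."""
        return [r.risk_id for r in self.register.values() if not r.compliant]
```

Keeping each step a separate method is one way to address the integrability concern raised above: individual steps can be wired into an existing risk register or GRC workflow without adopting the whole kernel.

One common way to realize the audit-proof record-keeping described above is an append-only, hash-chained log in which every record commits to the SHA-256 digest of its predecessor, so any retroactive edit breaks verification. This is a generic sketch that also tags every record with a role, matching the role-based approach; it is not the authors' implementation.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder digest for the first record


class AuditTrail:
    """Append-only log; each record commits to its predecessor's SHA-256 digest."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, actor_role: str, action: str, payload: dict) -> None:
        """Add a record; payload must be JSON-serializable."""
        body = {
            "timestamp": time.time(),
            "actor_role": actor_role,  # ties every record to a role
            "action": action,
            "payload": payload,
            "prev_hash": self.entries[-1]["hash"] if self.entries else GENESIS,
        }
        serialized = json.dumps(body, sort_keys=True).encode()
        self.entries.append({**body, "hash": hashlib.sha256(serialized).hexdigest()})

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev = GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            serialized = json.dumps(body, sort_keys=True).encode()
            if entry["prev_hash"] != prev or hashlib.sha256(serialized).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```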

Subject Classification

ACM Subject Classification
  • Computer systems organization
Keywords
  • AI Governance
  • Risk Management
  • AI Assessment

References

  1. R. Binns. Fairness in machine learning: Lessons from political philosophy. CoRR, abs/1712.03586, 2017. URL: https://arxiv.org/abs/1712.03586.
  2. International Organization for Standardization. ISO 31000:2018 Risk management. Technical report, ISO, 2018.
  3. European Parliament and the Council of the EU. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828, 2024.
  4. M. U. Scherer. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2):353-400, 2016.