Risk Analysis Technique for the Evaluation of AI Technologies with Respect to Directly and Indirectly Affected Entities (Practitioner Track)

Authors Joachim Iden, Felix Zwarg, Bouthaina Abdou




File

OASIcs.SAIA.2024.5.pdf
  • Filesize: 444 kB
  • 6 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.SAIA.2024.5

Author Details

Joachim Iden
  • TÜV Rheinland Japan Ltd., Osaka, Japan
Felix Zwarg
  • TÜV Rheinland Industrie Service GmbH, Köln, Germany
Bouthaina Abdou
  • TÜV Rheinland Industrie Service GmbH, Köln, Germany

Cite As

Joachim Iden, Felix Zwarg, and Bouthaina Abdou. Risk Analysis Technique for the Evaluation of AI Technologies with Respect to Directly and Indirectly Affected Entities (Practitioner Track). In Symposium on Scaling AI Assessments (SAIA 2024). Open Access Series in Informatics (OASIcs), Volume 126, pp. 5:1-5:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025) https://doi.org/10.4230/OASIcs.SAIA.2024.5

Abstract

AI technologies are often described as transformative for society. In fact, their impact is multifaceted, with both local and global effects that may be direct or indirect in nature. Effects can stem from both the intended use of the technology and its unintended side effects. Potentially affected entities include natural or juridical persons, groups of persons, society as a whole, the economy and the natural environment. A number of distinct roles characterise the relationship with a specific AI technology, including manufacturer, provider, voluntary user, involuntarily affected person, government, regulatory authority and certification body. For each role, specific properties must be identified and evaluated for relevance, including ethics-related properties such as privacy, fairness, human rights and human autonomy, as well as engineering-related properties such as performance, reliability, safety and security. As for any other technology, the deployment of an AI technology has identifiable lifecycle phases, including specification, design, implementation, operation, maintenance and decommissioning. In this paper we argue that all of these phases must be considered systematically in order to reveal both direct and indirect costs and effects, allowing an objective judgment of a specific AI technology. In the past, costs caused by one party but incurred by another (so-called 'externalities') have often been overlooked or deliberately obscured; our approach is intended to help remedy this. We therefore discuss possible impact mechanisms, represented by keywords such as resources, materials, energy, data, communication, transportation, employment and social interaction, in order to identify possible causal paths. For the purpose of the analysis, we distinguish degrees of stakeholder involvement to support the identification of those causal paths which are not immediately obvious.
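
To make the structure of such an analysis concrete, the following minimal sketch (not taken from the paper) enumerates the dimensions named in the abstract, i.e. roles, properties, lifecycle phases, impact mechanisms and degrees of stakeholder involvement, and filters the resulting combinations with an analyst-supplied relevance rule. The three-level involvement scale, the CausalPathCandidate record and the example screening predicate are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch: enumerate (role, property, phase, mechanism, involvement)
# combinations as candidate causal paths and screen them with a relevance rule.
# The concrete lists mirror the abstract; everything else is a hypothetical aid.

from dataclasses import dataclass
from itertools import product

ROLES = ["manufacturer", "provider", "voluntary user",
         "involuntarily affected person", "government",
         "regulatory authority", "certification body"]

PROPERTIES = ["privacy", "fairness", "human rights", "human autonomy",
              "performance", "reliability", "safety", "security"]

LIFECYCLE_PHASES = ["specification", "design", "implementation",
                    "operation", "maintenance", "decommissioning"]

IMPACT_MECHANISMS = ["resources", "materials", "energy", "data",
                     "communication", "transportation", "employment",
                     "social interaction"]

# Degrees of stakeholder involvement (assumed three-level scale for illustration).
INVOLVEMENT = ["direct", "indirect", "remote"]


@dataclass
class CausalPathCandidate:
    role: str
    prop: str
    phase: str
    mechanism: str
    involvement: str

    def describe(self) -> str:
        return (f"[{self.involvement}] {self.role} / {self.prop} "
                f"during {self.phase} via {self.mechanism}")


def enumerate_candidates(relevance) -> list:
    """Enumerate all dimension combinations and keep those the analyst's
    relevance predicate marks as worth further investigation."""
    return [
        CausalPathCandidate(r, p, ph, m, inv)
        for r, p, ph, m, inv in product(
            ROLES, PROPERTIES, LIFECYCLE_PHASES, IMPACT_MECHANISMS, INVOLVEMENT)
        if relevance(r, p, ph, m, inv)
    ]


if __name__ == "__main__":
    # Example screening rule: surface externality-like paths, i.e. effects on
    # involuntarily affected persons whose involvement is not direct.
    externality_like = enumerate_candidates(
        lambda r, p, ph, m, inv: r == "involuntarily affected person"
        and inv != "direct")
    for candidate in externality_like[:5]:
        print(candidate.describe())
```

Such an exhaustive enumeration is only a screening aid: it trades precision for coverage, which matches the abstract's aim of revealing causal paths that are not immediately obvious.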

Subject Classification

ACM Subject Classification
  • Social and professional topics → Computing / technology policy
  • Social and professional topics → Computing and business
  • Software and its engineering → Risk management
Keywords
  • AI
  • Risk Analysis
  • Risk Management
  • AI assessment
