
AI Readiness of Standards: Bridging Traditional Norms with Modern Technologies

Adrian Seeliger, Deutsches Institut für Normung e.V. (DIN), Berlin, Germany
Abstract

In an era where artificial intelligence (AI) is spreading throughout most industries, it is imperative to understand how existing regulatory frameworks, particularly technical standards, can adapt to accommodate AI technologies. This paper presents findings of an interdisciplinary research & development project aimed at evaluating the AI readiness of the German national body of standards, encompassing approximately 30,000 DIN, DIN EN, and DIN EN ISO documents. Utilizing a hybrid approach that combines human expertise with machine-assisted processes, we sought to determine whether these standards meet the conditions required for secure and purpose-specific AI implementation.

Our research focused on defining AI readiness, operationalizing this concept, and evaluating the extent to which existing standards meet these criteria. AI readiness refers to whether a standard complies with the conditions necessary for ensuring that an AI system operates securely and as intended. To operationalize AI readiness, we developed explicit criteria encompassing AI-specific requirements and the contextual application of these standards. A dual approach involving thorough human analyses and the use of software automation was employed. Human experts annotated standardization documents to create high-quality training data, while machine learning methodologies were utilized to develop AI models capable of classifying the AI readiness of these documents.

Three different software tools were developed to provide a proof-of-concept for a more scalable and efficient review of the 30,000 standards. Despite certain technical and organizational challenges, the integration of both human insight and machine-led processes provided valuable and actionable results and insights for further development.

Key findings address the exact choice of words and graphical representation in standardization documents, normative references, the categorization of standardization documents, and suggestions for concrete document adaptations.

The results underscore the importance of an interdisciplinary approach, combining domain-specific knowledge and advanced AI capabilities, to future-proof the intricate regulatory frameworks that underpin our industries and society.

Keywords and phrases:
Standardization, Norms and Standards, AI Readiness, Artificial Intelligence, Knowledge Automation
Category:
Practitioner Track
Copyright and License:
© Adrian Seeliger; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Proper nouns: People, technologies and companies → International Organization for Standardization; Computing methodologies → Artificial intelligence; Proper nouns: People, technologies and companies → European Telecommunications Standards Institute
Acknowledgements:
We extend our gratitude to the Bundesministerium für Wirtschaft und Klimaschutz (BMWK). Our sincere thanks go to the Fraunhofer IAIS teams “AI Safeguarding and Certification” and “Natural Language Understanding” for their invaluable support, guidance, expertise, and collaboration throughout this project. We also wish to thank Fraunhofer IKS, IEM, HHI, MEWIS, and INT for their significant contributions, without which this project would not have been possible.
Editors:
Rebekka Görge, Elena Haedecke, Maximilian Poretschkin, and Anna Schmitz

1 Introduction

AI as an emerging technology will impact most areas of industry and society. The project “AI Readiness of Standards” set out to determine what this means for the German national body of standards, which comprises roughly 30,000 DIN, DIN EN, and DIN EN ISO documents.

The approach was a mixture of individual human assessment and a machine-assisted process.

During the 2.5 years of the project, answers to the following research questions were sought:

  • How can a complex, 100-year-old regulatory framework be adapted for AI technologies?

  • Is a comprehensive review of all 30,000 documents for AI readiness necessary or possible?

  • How crucial is the application context in adapting standards for AI?

  • How might software technologies help with this?

This paper gives an overview of the findings.

Section 2 gives an overview of definitions and previous research, setting out the fundamental principles for this project.

These principles were applied by human experts to a large number of existing standards documents, resulting in section 3, which provides a high-level overview of the challenges and results, including a brief discussion.

To be able to apply the principles to the entirety of the body of standards in an economically feasible way, automation technology has to be used. Section 4 describes the development and use of three primary software tools.

Finally, section 5 concludes the paper with the project’s achievements and possible future challenges.

2 Methodology: Defining and Operationalizing AI Readiness

What actually is AI readiness? In current literature, there are several understandings of AI readiness, mostly revolving around organization theory or innovation adoption frameworks. AI readiness generally refers to an organization’s capacity to deploy and utilize AI technologies effectively. The concept encompasses various dimensions including technological infrastructure, organizational preparedness, and the external environment.

For instance, one study describes AI readiness as part of the broader process of AI adoption, emphasizing the need for ongoing assessment and development rather than a single preliminary evaluation [4]. It posits that AI readiness and adoption are highly interdependent and must be integrated throughout the entire adoption process to ensure successful implementation. Another work offers an AI readiness framework which assesses an organization’s capabilities in four key dimensions: technologies, activities, boundaries, and goals, providing a pragmatic tool for facilitating digital transformation within organizations [3].

[2] highlight technological, organizational, and environmental factors, while [1] define AI readiness as an organization’s capacity to implement and utilize AI in terms of cultural aspects and necessary resources, emphasizing the importance of top management support, resource availability, and organizational infrastructure.

Regarding standardization, there was no known precedent for what an AI-ready standards document should look like, so we defined the term ourselves as follows (summarized):

The AI readiness of a standard refers to whether it meets conditions that ensure an AI system compliant with this standard is secure and remains so according to its intended purpose. A standard is considered AI ready if it is specific enough to cover the use of AI or AI-specific requirements and measures, while being unambiguous for users.

For more information see [5].

The initial definition and the evaluation method were developed through lengthy discussions and a broad stakeholder participation process, engaging AI experts, DIN standards committees, business associations, and the public. This generally worked well, but already at this stage there were often more questions than answers.

While the evaluation method was later applied, feedback loops provided input for 20 subsequent iterations. Validation and testing under real conditions were identified as key to success, but proved non-trivial.

Industry experts on the application of standardization documents in their respective fields proved rare, and generating the necessary expert knowledge therefore required a higher level of human resources.

Additionally, the lack of legal expertise in the project led to this aspect being deferred and addressed in a dedicated step within the evaluation method.

3 Human Expert AI Readiness Assessment

The high complexity of the task, a detailed human expert analysis of approximately 1,100 standardization documents, made it challenging to ensure consistency in evaluation across different experts. The evaluation of normative documents was heavily influenced by the evaluators’ background knowledge, personal biases, and their interpretation of the content, resulting, for example, in differing understandings of concepts such as AI or the practicability of AI applications in a specific domain. Efforts were made to verify the plausibility of classifications and to resolve conflicts through expert reviews. This process required an additional layer of review and revision, adding complexity to the workflow. A certain level of bias and interpretive flexibility remained.

3.1 General AI Potentials

The integration of artificial intelligence (AI) in various fields holds significant potential for improving efficiency, accuracy, and consistency. To realize these benefits, it is crucial to replace subjective formulations that appeal to human cognitive abilities with explicit and objective descriptions. This ensures a clear understanding and application of AI capabilities without human biases.

Standardizing tables and figures is essential for unifying diverse representations of knowledge. By assigning clear and distinguishable names to different formats (e.g., differentiating between a sketch and a technical drawing, or a tolerance table and a test procedure table), computers can more effectively process and differentiate this information.

Identifying the type of document (e.g., product standards, process standards) beforehand, rather than relying on text searches within the document, can significantly streamline information retrieval and processing. Additionally, simplifying cross-references to other normative documents in a digitally capturable format, including reference cascades, would greatly enhance accessibility and usability.
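Once cross-references are captured in a machine-readable format, reference cascades can be resolved mechanically. A minimal sketch in Python of such a transitive lookup (the reference map and document identifiers are invented for illustration):

```python
from collections import deque

def resolve_reference_cascade(references, start):
    """Collect all standards reachable from `start` via normative
    references, following reference cascades transitively."""
    seen = set()
    queue = deque([start])
    while queue:
        doc = queue.popleft()
        for ref in references.get(doc, []):
            if ref not in seen:
                seen.add(ref)
                queue.append(ref)
    return seen

# Hypothetical reference map: each standard lists the documents it cites.
refs = {
    "DIN EN ISO 14971": ["ISO 14971", "ISO Guide 63"],
    "ISO 14971": ["ISO Guide 63"],
}
print(sorted(resolve_reference_cascade(refs, "DIN EN ISO 14971")))
# → ['ISO 14971', 'ISO Guide 63']
```

A breadth-first traversal like this also makes circular reference chains harmless, since each document is visited at most once.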

AI risks and quality requirements should be marked by references to horizontal standards. This not only ensures compliance with safety standards but also provides a framework for assessing and managing potential AI-related risks.

3.2 AI Potentials in Mechanical Engineering

Throughout the project, a variety of computer vision applications were found to be the main driver of AI relevance in standardization documents. Developing horizontal standards for visual inspections would standardize this critical, frequently referenced process, ensuring uniformity and reliability.

Similarly, creating standards for document management, automated document inspections, and the generation of test reports would streamline these activities, enhancing efficiency and accuracy.

To facilitate the safe use of AI, especially in safety-critical applications, standards should regulate the variability and restriction of programming languages, making them more predictable and secure. ISO/IEC TR 5469 could serve as a sector-specific AI horizontal standard, potentially including human or programmatic control instances to ensure oversight and accountability.

Further, there is a need for new standards governing AI’s use in control systems. This includes clarifying the classification and application of AI within Safety Integrity Level (SIL) and Performance Level (PL) frameworks. Adapting existing standards such as ISO 13849-1 and -2, and mapping AI performance to current SIL/PL levels, will help maintain functional safety standards. The use of AI systems as redundancies can enhance safety by providing backup options that meet or exceed the reliability of human-operated systems.

3.3 AI Potentials in Healthcare

The medical field would benefit from sector-specific AI horizontal standards for various applications. These include antibiogram analysis, the generation of test reports, and the linking and evaluation of diverse sensor data. For example, in microbiology, AI could aid in the classification of bacteria and the measurement of growth rates (cell colony count). In radiology, AI can enhance the identification of human cell types, such as distinguishing between tumor cells and tissue cells.

Additionally, existing standards, like the risk management protocol for medical devices (DIN EN ISO 14971), should be reviewed and potentially enhanced to align with the advancements in AI technology. This will ensure that AI applications in healthcare maintain the rigorous safety and quality standards required in this critical sector.

3.4 AI Potentials in Automotive

In the automotive sector, implementing clustering and introducing specific categories or adapting an A/B/C type system, as seen in the functional machine safety domain, would streamline various functions. These categories should cover interface descriptions, specifications, architecture descriptions, test standards, and procedure descriptions. Such a structured approach would enhance the clarity and efficiency of AI applications in automotive engineering.

4 Automation Attempts: Machine Assisted AI Readiness Assessment

The national body of standards has traditionally been dominated by technical drawings, specifications, and historically developed document families. However, in contemporary applications, there has been a significant shift towards digitization, with increasing demands for “smart” standards, automated integration of standard contents, and even automated conformity checks.

Given the vast number of approximately 30,000 standards, a meticulous manual review is deemed impractical. Consequently, this project explored the utilization of software for the large-scale and scalable analysis of these standards.

Specifically, the research focused on the conceptualization, development, and investigation of three primary tools: a tool for semantic similarity search, a tool for linking standardization documents to a knowledge database, and an AI tool designed for the automated classification of standards into AI readiness classes.

4.1 Semantic Similarity Search

The Semantic Similarity Search tool was developed to find text similarities. It leverages the representational capabilities of transformer encoder models, such as BERT, to generate meaningful vector representations of documents. These vector representations capture the semantic essence of the input text, facilitating the identification of semantically similar paragraphs, sentences, or whole documents. The goal of this tool was to ease (manual) classification of documents by finding similar documents to already classified ones. Due to delays in implementation, Semantic Similarity Search was not extensively used in the project.
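The retrieval step behind such a tool can be sketched as a nearest-neighbour search over document embeddings. The toy vectors below stand in for the BERT-derived representations described above, and all document names are invented; only the ranking logic is what the sketch illustrates:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query_vec, doc_vecs):
    """Rank document names by cosine similarity to the query embedding."""
    return sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                  reverse=True)

# Toy 3-dimensional embeddings standing in for transformer sentence vectors.
docs = {
    "DIN 1234 (welding)":    [0.9, 0.1, 0.0],
    "DIN 5678 (welding QA)": [0.8, 0.2, 0.1],
    "DIN 9999 (textiles)":   [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]
print(most_similar(query, docs)[0])  # nearest neighbour: a welding standard
```

In the project’s setting, an already classified document would serve as the query, and its nearest neighbours would be candidates for the same AI readiness class.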

4.2 Knowledge Database Tool

The Knowledge Database Tool was created to link the entire body of standards to the existing open-source knowledge database OpenAlex. Comparing each standard’s title and short description to the structured data of 200 million publications enabled the calculation of a specific AI-distance score for each standards document. A low AI-distance score could signal a close relation of that document to AI technologies and thus a higher level of AI relevance, with possible implications for AI readiness. Furthermore, the citation network of standards citing other standards was analyzed, resulting in similar AI-distance scores for standardization documents with mutual citations.
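The exact scoring formula is not specified here, but the described effect, mutually citing documents converging to similar AI-distance scores, can be illustrated with a simple smoothing pass over the citation graph. The scores, edges, and the `alpha` weight below are invented for illustration:

```python
def smooth_scores(scores, citations, alpha=0.5):
    """One smoothing pass: pull each document's AI-distance score toward
    the mean score of the standards it cites, so mutually citing
    documents converge to similar values."""
    new = {}
    for doc, s in scores.items():
        cited = [scores[c] for c in citations.get(doc, []) if c in scores]
        if cited:
            new[doc] = (1 - alpha) * s + alpha * sum(cited) / len(cited)
        else:
            new[doc] = s
    return new

# Invented scores: lower = closer to AI-related publications in OpenAlex.
scores = {"A": 0.2, "B": 0.8, "C": 0.5}
citations = {"A": ["B"], "B": ["A"]}  # A and B cite each other
print(smooth_scores(scores, citations))
```

After one pass, the mutually citing documents A and B end up with identical scores, while the uncited, non-citing document C keeps its original value.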

4.3 AI Tool

The AI tool is designed to automate the classification of AI readiness of standardization documents. It consists of three components:

The annotation component enabled the labelling of standardization documents by human experts, producing the training data necessary for developing the classification mechanisms of the AI tool.

The AI model is the core classification module, developed using advanced machine learning methodologies and utilizing the curated training data of the annotation component to categorize the AI readiness of standardization documents.

The User Interface (UI) offers an intuitive platform to showcase the AI model’s results.
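As an illustration of the overall flow from annotated training data to automated classification, the following heavily simplified sketch uses a word-overlap classifier. The project’s actual AI model was developed with advanced machine learning methodologies; all labels, texts, and class names below are invented:

```python
from collections import Counter

def train(labelled_docs):
    """Build one word-frequency profile per class from expert-annotated
    training data (a stand-in for the real model's training step)."""
    profiles = {}
    for text, label in labelled_docs:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Assign the class whose profile shares the most words with `text`."""
    words = Counter(text.lower().split())
    return max(profiles, key=lambda lbl: sum((profiles[lbl] & words).values()))

# Invented miniature training set; real labels came from the annotation
# component described above.
training = [
    ("visual inspection requires human judgement of surface defects", "AI-relevant"),
    ("automated image analysis of weld seams", "AI-relevant"),
    ("bolt dimensions and thread tolerances in millimetres", "not AI-relevant"),
]
model = train(training)
print(classify(model, "image analysis of surface defects"))
```

The three components map onto this sketch as follows: the annotation component produces `training`, the AI model corresponds to `train`/`classify`, and the UI would present the resulting labels to reviewers.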

Technical & Organizational Challenges

  • Issues such as the inability to annotate graphical elements (like graphs, schematics, and tables) due to technical restrictions in the labelling tool posed significant challenges.

  • The AI model could not be fully trained with the evaluating experts’ external knowledge. This includes information from referenced standards, the application context, and implicit knowledge.

  • To ensure copyright protection and information security, measures such as the use of external servers and specialized VPN access were implemented. These measures, combined with the collaboration of multiple parties, extended the initialization phase and complicated the resolution of issues like data exchange and server failures, making the process more laborious and time-consuming.

  • Watermarks in PDF documents caused difficulties as they could be on a different layer than the normal text. While labelling, the tool often couldn’t distinguish between these layers, leading to the unintended annotation of entire pages. Watermarks were required for information security reasons.

  • The labelling tool used was robust but had high memory requirements, which sometimes led to technical issues such as latency and synchronization problems. The workflow had to be adapted to accommodate these limitations, and the extensive memory needed led to occasional disruptions.

  • Users reported that selecting a label and marking the corresponding text passage appeared smooth initially. However, changing the label selection to create a new annotation often led to a lag. This delay sometimes resulted in incorrect label selection and subsequent inaccurate annotations.

  • The annotation component struggled to handle text that spanned multiple pages.

  • The PDF import feature occasionally required manual parsing for correct uploading. If there were problems with PDF uploads, TXT formats were used instead. This involved extracting text from PDF files using either an XML parser or a PDF parser, with a preference for XML due to its structured nature and higher accuracy.
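The preference for XML can be illustrated with a minimal extraction sketch using Python’s standard library. The document schema shown is invented for illustration; real standards use richer publisher-specific formats:

```python
import xml.etree.ElementTree as ET

def extract_text(xml_string):
    """Pull plain text from an XML representation of a standard,
    keeping document order and skipping markup."""
    root = ET.fromstring(xml_string)
    return " ".join(t.strip() for t in root.itertext() if t.strip())

# Invented fragment illustrating structured access to clauses and titles.
sample = """<standard>
  <title>Example Standard</title>
  <clause n="4.1">Visual inspection shall be performed.</clause>
</standard>"""
print(extract_text(sample))
```

Because the XML preserves element boundaries, downstream steps can also target specific structures (e.g. only clause text), which is harder to do reliably with text extracted from PDF layout.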

5 Conclusion

The project aimed to develop a proof-of-concept and to provide initial concrete indications and points of connection for AI in established standards. Both goals were achieved and supported with examples and results.

However, as is often the case, complex issues require complex solutions. A systematic integration of AI into the standards framework requires deepening the developed content and building competencies. Software and data-driven approaches can help, but they also come with non-negligible additional organizational and technical efforts. We are curious to see how the advancing digitalization of our society will impact standardization and look forward to future challenges.

References

  • [1] Wajid Ali and Abdul Zahid Khan. Factors influencing readiness for artificial intelligence: A systematic literature review. Data Science and Management, 2024.
  • [2] Sulaiman Alsheibani, Yen Cheung, and Chris Messom. Artificial intelligence adoption: AI-readiness at firm-level. In Pacific Asia Conference on Information Systems 2018, page 37. Association for Information Systems, 2018. URL: https://aisel.aisnet.org/pacis2018/37.
  • [3] Jonny Holmström. From AI to digital transformation: The AI readiness framework. Business Horizons, 65(3):329–339, 2022. doi:10.1016/j.bushor.2021.03.006.
  • [4] Jan Jöhnk, Malte Weißert, and Katrin Wyrtki. Ready or not, AI comes—an interview study of organizational AI readiness factors. Business & Information Systems Engineering, 63(1):5–20, 2021. doi:10.1007/s12599-020-00676-7.
  • [5] Kostina Prifti, Esra Demir, Julia Krämer, Klaus Heine, and Evert Stamhuis, editors. Digital Governance: Confronting the Challenges Posed by Artificial Intelligence. Information Technology and Law Series. T.M.C. Asser Press, The Hague, 1st edition, 2024.