ACETONE: Predictable Programming Framework for ML Applications in Safety-Critical Systems

Authors: Iryna De Albuquerque Silva, Thomas Carle, Adrien Gauffriau, Claire Pagetti


Author Details

Iryna De Albuquerque Silva
  • ONERA, Toulouse, France
Thomas Carle
  • IRIT - Univ Toulouse 3 - CNRS, France
Adrien Gauffriau
  • Airbus, Toulouse, France
Claire Pagetti
  • ONERA, Toulouse, France

Cite As

Iryna De Albuquerque Silva, Thomas Carle, Adrien Gauffriau, and Claire Pagetti. ACETONE: Predictable Programming Framework for ML Applications in Safety-Critical Systems. In 34th Euromicro Conference on Real-Time Systems (ECRTS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 231, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Machine learning applications have been gaining considerable attention in the field of safety-critical systems. Nonetheless, no development process that reaches classical safety confidence levels has been accepted to date. For this reason, we have developed a generic programming framework called ACETONE that complies with safety objectives (including traceability and WCET computation) for machine learning. In practice, the framework generates C code from a detailed description of off-line-trained feed-forward deep neural networks; the generated code preserves the semantics of the original trained model, and its WCET can be assessed with OTAWA. We have compared our results with Keras2c and uTVM with static runtime on a realistic set of benchmarks.

Subject Classification

ACM Subject Classification
  • Computer systems organization → Real-time systems
  • Software and its engineering → Software notations and tools

Keywords and Phrases
  • Real-time safety-critical systems
  • Worst Case Execution Time analysis
  • Artificial Neural Networks implementation




References

  1. Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
  2. Erin Alves, Devesh Bhatt, Brendan Hall, Kevin Driscoll, Anitha Murugesan, and John Rushby. Considerations in assuring safety of increasingly autonomous systems. NASA, 2018.
  3. Junjie Bai, Fang Lu, Ke Zhang, et al. ONNX: Open Neural Network Exchange, 2019.
  4. C. Ballabriga, H. Cassé, C. Rochange, and P. Sainrat. OTAWA: An Open Toolbox for Adaptive WCET Analysis. In IFIP Workshop on Software Technologies for Future Embedded and Ubiquitous Systems (SEUS), 2010.
  5. Siddhartha Bhattacharyya, Darren Cofer, David Musliner, Joseph Mueller, and E. Engstrom. Certification considerations for adaptive systems. In 2015 International Conference on Unmanned Aircraft Systems (ICUAS 2015), pages 270-279, July 2015.
  6. Timothy Bourke, Lélio Brun, Pierre-Évariste Dagand, Xavier Leroy, Marc Pouzet, and Lionel Rieg. A formally verified compiler for Lustre. In Albert Cohen and Martin T. Vechev, editors, Proceedings of the 38th Conference on Programming Language Design and Implementation (PLDI), pages 586-601, 2017.
  7. Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Q. Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. TVM: End-to-end optimization stack for deep learning. CoRR, abs/1802.04799, 2018.
  8. Sergei Chichin, Dominique Portes, Marc Blunder, and Victor Jegu. Capability to embed deep neural networks: Study on CPU processor in avionics context. In 10th European Congress on Embedded Real Time Systems (ERTS 2020), 2020.
  9. Jean-Louis Colaço, Bruno Pagano, Cédric Pasteur, and Marc Pouzet. Scade 6: From a Kahn semantics to a Kahn implementation for multicore. In 2018 Forum on Specification and Design Languages (FDL), pages 5-16, 2018.
  10. Rory Conlin, Keith Erickson, Joseph Abbate, and Egemen Kolemen. Keras2c: A library for converting Keras neural networks to real-time compatible C. Eng. Appl. Artif. Intell., 100:104182, 2021.
  11. TVM consortium. microTVM: TVM on bare-metal, 2021.
  12. Mathieu Damour, Florence De Grancey, Christophe Gabreau, Adrien Gauffriau, Jean-Brice Ginestet, Alexandre Hervieu, Thomas Huraux, Claire Pagetti, Ludovic Ponsolle, and Arthur Clavière. Towards certification of a reduced footprint ACAS-Xu system: A hybrid ML-based solution. In 40th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), pages 34-48, 2021.
  13. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), pages 248-255, 2009.
  14. EUROCAE / RTCA. DO-178C, Software Considerations in Airborne Systems and Equipment Certification, 2011.
  15. Google. Protocol Buffers, 2001.
  16. Intel. OpenVINO documentation, 2018.
  17. Kalray. KaNN platform for high-performance machine learning inference on Kalray's MPPA® intelligent processor, 2021.
  18. Kalray. MPPA® Coolidge™ processor - white paper, 2021.
  19. Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In Rupak Majumdar and Viktor Kuncak, editors, 29th International Conference on Computer Aided Verification (CAV), pages 97-117, 2017.
  20. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  21. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, editors, 26th Annual Conference on Neural Information Processing Systems, pages 1106-1114, 2012.
  22. Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques A. Pienaar, et al. MLIR: Scaling compiler infrastructure for domain specific computation. In Jae W. Lee, Mary Lou Soffa, and Ayal Zaks, editors, International Symposium on Code Generation and Optimization (CGO), pages 2-14, 2021.
  23. Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541-551, 1989.
  24. Y. Liu, C. Chen, Ru Zhang, Tingting Qin, Xiang Ji, Haoxiang Lin, and Mao Yang. Enhancing the interoperability between deep learning frameworks by model conversion. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020.
  25. NVIDIA. TensorRT documentation, 2021.
  26. NXP. eIQ™ ML software development environment, 2020.
  27. Michael P. Owen, Adam Panken, Robert Moss, Luis Alvarez, and Charles Leeper. ACAS Xu: Integrated collision avoidance and detect and avoid capability for UAS. In 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), pages 1-10, 2019.
  28. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.
  29. Hammond Pearce, Xin Yang, Partha S. Roop, Marc Katzef, and Tórur Biskopstø Strøm. Designing neural networks for real-time systems. IEEE Embedded Systems Letters, 2020.
  30. Hugo Pompougnac, Ulysse Beaugnon, Albert Cohen, and Dumitru Potop-Butucaru. From SSA to Synchronous Concurrency and Back. Research Report RR-9380, INRIA Sophia Antipolis - Méditerranée (France), December 2020.
  31. Partha Pratim Ray. A review on TinyML: State-of-the-art and prospects. Journal of King Saud University - Computer and Information Sciences, 34(4):1595-1623, 2022.
  32. Martin Schoeberl, Sahar Abbaspour, Benny Akesson, Neil Audsley, Raffaele Capasso, Jamie Garside, et al. T-CREST: Time-predictable multi-core architecture for embedded systems. Journal of Systems Architecture, 61(9):449-471, 2015.
  33. Olivier Sentieys, Silviu Filip, David Briand, David Novo, Etienne Dupuis, Ian O'Connor, and Alberto Bosio. AdequateDL: Approximating deep learning accelerators. In 24th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS 2021), 2021.
  34. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations (ICLR), 2015.
  35. Rafael Stahl. µTVM StaticRT CodeGen, 2021.
  36. The Coq Development Team. The Coq Proof Assistant Reference Manual, version 8.0, 2004.
  37. Texas Instruments. TCI6630K2L Multicore DSP+ARM KeyStone II System-on-Chip. Technical Report SPRS893E, Texas Instruments Incorporated, 2013.
  38. The Khronos NNEF Working Group. Neural Network Exchange Format, 2018.
  39. Reinhard Wilhelm, Jakob Engblom, Andreas Ermedahl, Niklas Holsti, Stephan Thesing, David Whalley, Guillem Bernat, Christian Ferdinand, Reinhold Heckmann, Tulika Mitra, Frank Mueller, Isabelle Puaut, Peter Puschner, Jan Staschulat, and Per Stenström. The worst-case execution-time problem - overview of methods and survey of tools. ACM Trans. Embed. Comput. Syst., 2008.