Analyzing the Stability of Relative Performance Differences Between Cloud and Embedded Environments

Authors: Rumen Rumenov Kolev, Christopher Helpa




File

OASIcs.WCET.2023.8.pdf
  • Filesize: 1.18 MB
  • 12 pages

Author Details

Rumen Rumenov Kolev
  • TTTech Auto AG, Wien, Austria
  • TU Wien, Austria
Christopher Helpa
  • TTTech Auto AG, Wien, Austria

Cite As

Rumen Rumenov Kolev and Christopher Helpa. Analyzing the Stability of Relative Performance Differences Between Cloud and Embedded Environments. In 21st International Workshop on Worst-Case Execution Time Analysis (WCET 2023). Open Access Series in Informatics (OASIcs), Volume 114, pp. 8:1-8:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
https://doi.org/10.4230/OASIcs.WCET.2023.8

Abstract

There has been a shift towards the software-defined vehicle in the automotive industry in recent years. To ensure the correct behaviour of critical as well as non-critical software functions, such as those found in Autonomous Driving/Driver Assistance subsystems, extensive software testing needs to be performed. Using embedded hardware for these tests is either very expensive or takes a prohibitively long time in relation to the fast development cycles in the industry. To reduce development bottlenecks, test frameworks executed in cloud environments that leverage the scalability of the cloud are an essential part of the development process. However, relying on more performant cloud hardware for the majority of tests means that performance problems only become apparent in later development phases, when the software is deployed to the real target. If the performance relation between executing in the cloud and on the embedded target can be approximated with sufficient precision, however, the expressiveness of the executed tests can be improved. Moreover, since a fully integrated system consists of a large number of software components that, at any given time, exhibit an unknown mix of best-/average-/worst-case behaviour, it is critical to know whether the performance relation differs depending on the inputs. In this paper, we examine the relative performance differences between a physical ARM-based chipset and an ARM-based cloud virtual machine, using a generic benchmark and two algorithms representative of typical automotive workloads, each modified to generate best-/average-/worst-case behaviour in a reproducible and controlled way. We determine that the performance difference factor lies between 1.8 and 3.6 for synthetic benchmarks and around 2.0-2.8 for the more representative benchmarks. These results indicate that it may be possible to relate cloud to embedded performance with acceptable precision, especially when workload characterization is taken into account.
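The quantity at the heart of the abstract is the factor by which the embedded target runs slower than the cloud VM for the same workload. Below is a minimal sketch of how such a factor might be computed per workload behaviour and then used to scale cloud timings; it assumes hypothetical paired measurements, and every number and name in it is illustrative rather than taken from the paper.

# Hedged sketch: estimating a cloud-to-embedded performance factor.
# All timings are invented placeholders, not the authors' measurements.
from statistics import median

# Hypothetical wall-clock times in milliseconds for the same benchmark
# binary, keyed by the induced workload behaviour.
cloud_ms    = {"best": 11.0, "average": 14.5, "worst": 21.0}
embedded_ms = {"best": 24.0, "average": 33.0, "worst": 59.0}

def performance_factor(embedded: float, cloud: float) -> float:
    """Factor by which the embedded target is slower than the cloud VM."""
    return embedded / cloud

factors = {mode: performance_factor(embedded_ms[mode], cloud_ms[mode])
           for mode in cloud_ms}

for mode, f in factors.items():
    print(f"{mode:>7}: factor = {f:.2f}")

# If the factor stays in a narrow band across behaviours, a single
# scaling constant (here the median) lets cloud timings approximate
# embedded timings before target hardware is available.
scale = median(factors.values())
print(f"median factor = {scale:.2f}, "
      f"predicted embedded worst case = {cloud_ms['worst'] * scale:.1f} ms")

The stability question the paper raises shows up in the spread of the per-mode factors: the narrower that spread across best-, average-, and worst-case inputs, the more trustworthy a single scaling constant becomes.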

ACM Subject Classification
  • Software and its engineering → Software development techniques
Keywords
  • Performance Benchmarking
  • Performance Factor Stability
  • Software Development
  • Cloud Computing
  • WCET

