GenE: A Benchmark Generator for WCET Analysis

Authors: Peter Wägemann, Tobias Distler, Timo Hönig, Volkmar Sieh, and Wolfgang Schröder-Preikschat




File: OASIcs.WCET.2015.33.pdf (0.5 MB, 11 pages)

Cite As

Peter Wägemann, Tobias Distler, Timo Hönig, Volkmar Sieh, and Wolfgang Schröder-Preikschat. GenE: A Benchmark Generator for WCET Analysis. In 15th International Workshop on Worst-Case Execution Time Analysis (WCET 2015). Open Access Series in Informatics (OASIcs), Volume 47, pp. 33-43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)
https://doi.org/10.4230/OASIcs.WCET.2015.33

Abstract

The fact that many benchmarks for evaluating worst-case execution time (WCET) analysis tools are based on real-world applications greatly increases the value of their results. However, at the same time, the complexity of these programs makes it difficult, sometimes even impossible, to obtain all corresponding flow facts (i.e., loop bounds, infeasible paths, and input values triggering the WCET), which are essential for a comprehensive evaluation. In this paper, we address this problem by presenting GenE, a benchmark generator that in addition to source code also provides the flow facts of the benchmarks created. To generate a new benchmark, the tool combines code patterns that are commonly found in real-time applications and are challenging for WCET analyzers. By keeping track of how patterns are put together, GenE is able to determine the flow facts of the resulting benchmark based on the known flow facts of the patterns used. Using this information, it is straightforward to synthesize the accurate WCET, which can then serve as a baseline for the evaluation of WCET analyzers.
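
The abstract describes GenE's core idea: a benchmark is assembled from code patterns whose flow facts (loop bounds, infeasible paths, WCET-triggering inputs) are known in advance, so the flow facts and an exact WCET baseline of the composed program can be derived from how the patterns were combined. The C fragment below is a minimal illustrative sketch of that composition principle; it is not taken from the paper, and all identifiers and bounds (pattern_inner, OUTER_BOUND, INNER_BOUND, trigger) are hypothetical.

    /* Hypothetical sketch of a GenE-style pattern composition.
     * Each pattern carries a known flow fact; composing patterns lets a
     * generator derive the flow facts of the whole benchmark. */

    #define OUTER_BOUND 8   /* flow fact of pattern A: outer loop runs at most 8 times  */
    #define INNER_BOUND 16  /* flow fact of pattern B: inner loop runs at most 16 times */

    volatile int sink;      /* keeps the compiler from optimizing the loops away */

    /* Pattern B: inner loop with a statically known bound. */
    static void pattern_inner(void)
    {
        for (int i = 0; i < INNER_BOUND; i++) {
            sink += i;
        }
    }

    /* Pattern A: outer loop nesting pattern B. The combined flow fact
     * (at most OUTER_BOUND * INNER_BOUND inner iterations in total)
     * follows directly from how the patterns were put together. */
    void benchmark_entry(int trigger)
    {
        for (int j = 0; j < OUTER_BOUND; j++) {
            if (trigger == 0) {
                sink -= j;        /* cheap branch, taken only for trigger == 0 */
            } else {
                pattern_inner();  /* expensive branch: the WCET-triggering
                                   * input (trigger != 0) takes this path   */
            }
        }
    }

    int main(void)
    {
        benchmark_entry(1);       /* invoke with a (hypothetical) WCET-triggering input */
        return 0;
    }

Because the generator records which pattern contributed each loop and branch, the loop bounds, the infeasible branch on the WCET path, and the WCET-triggering input of the composed program are known by construction rather than recovered by analysis.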
Keywords
  • WCET
  • benchmark generation
  • flow facts
  • WCET Tool Challenge
