STR2RTS: Refactored StreamIT Benchmarks into Statically Analyzable Parallel Benchmarks for WCET Estimation & Real-Time Scheduling

Authors: Benjamin Rouxel, Isabelle Puaut




File

OASIcs.WCET.2017.1.pdf
  • Filesize: 0.52 MB
  • 12 pages


Cite As

Benjamin Rouxel and Isabelle Puaut. STR2RTS: Refactored StreamIT Benchmarks into Statically Analyzable Parallel Benchmarks for WCET Estimation & Real-Time Scheduling. In 17th International Workshop on Worst-Case Execution Time Analysis (WCET 2017). Open Access Series in Informatics (OASIcs), Volume 57, pp. 1:1-1:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)
https://doi.org/10.4230/OASIcs.WCET.2017.1

Abstract

Finding non-proprietary, architecture-independent, exploitable parallel benchmarks for Worst-Case Execution Time (WCET) estimation and real-time scheduling has long been difficult: in contrast with the single-core era and its Mälardalen benchmark suite, there is no consensus on a parallel benchmark suite. This document bridges part of this gap by presenting a collection of benchmarks with the following properties: (i) they are easily analyzable by static WCET estimation tools (written in structured C, in particular without goto statements or dynamic memory allocation, and containing flow information such as loop bounds); (ii) they are independent of any particular run-time system (MPI, OpenMP) or real-time operating system. Each benchmark is composed of the C source code of its tasks and an XML description of the structure of the application (the tasks and, when applicable, the amount of data exchanged between them). Each benchmark can be integrated in a full end-to-end empirical method validation protocol on multi-core architectures. The proposed collection is derived from the well-known StreamIT [Thies et al., Comp. Constr., 2002] benchmark suite and will be integrated into the TACLeBench suite [Falk et al., WCET, 2016] in the near future. All these benchmarks are available at https://gitlab.inria.fr/brouxel/STR2RTS.
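
To make property (i) concrete, here is a minimal sketch of what a statically analyzable task could look like in this style. The loop-bound annotation uses the TACLeBench _Pragma("loopbound ...") convention, consistent with the planned TACLeBench integration; the FIR filter itself and its sizes are illustrative assumptions, not code taken from the suite:

    #include <stdint.h>

    #define NTAPS 16  /* filter size fixed at compile time */

    static int32_t coeff[NTAPS];
    static int32_t window[NTAPS];

    /* One step of a hypothetical FIR task: structured C only
     * (no goto, no dynamic memory allocation), and the loop
     * carries an explicit bound for static WCET analyzers. */
    int32_t fir_step(void)
    {
        int64_t acc = 0;
        _Pragma("loopbound min 16 max 16")
        for (int i = 0; i < NTAPS; ++i) {
            acc += (int64_t)coeff[i] * window[i];
        }
        return (int32_t)(acc >> 16);
    }

The per-benchmark XML description then ties such tasks together. The sketch below only illustrates the kind of information the abstract mentions (tasks plus the amount of data exchanged); element and attribute names are hypothetical, not the actual STR2RTS schema:

    <!-- Hypothetical application description: two tasks and one
         channel with the data volume exchanged per activation. -->
    <application name="fir_demo">
      <task id="producer" source="producer.c"/>
      <task id="fir"      source="fir.c"/>
      <channel from="producer" to="fir" bytes="64"/>
    </application>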
Keywords
  • Parallel benchmarks
  • Tasks scheduling
  • Worst-Case Execution Time estimation


References

  1. Matthias Becker, Dakshina Dasari, Borislav Nikolic, Benny Akesson, Vincent Nélis, and Thomas Nolte. Contention-free execution of automotive applications on a clustered many-core platform. In ECRTS, 2016.
  2. Christian Bienia. Benchmarking Modern Multiprocessors. PhD thesis, Princeton University, January 2011.
  3. Greet Bilsen, Marc Engels, Rudy Lauwereins, and Jean A. Peperstraete. Cyclo-static data flow. In 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), volume 5, pages 3255-3258. IEEE, 1995.
  4. Enrico Bini and Giorgio C. Buttazzo. Measuring the performance of schedulability tests. Real-Time Systems, 30(1-2):129-154, 2005.
  5. Damien Hardy, Benjamin Rouxel, and Isabelle Puaut. The Heptane static worst-case execution time estimation tool. In 17th International Workshop on Worst-Case Execution Time Analysis (WCET 2017), volume 57 of OpenAccess Series in Informatics (OASIcs), 2017. URL: http://dx.doi.org/10.4230/OASIcs.WCET.2017.8.
  6. Yorick De Bock, Sebastian Altmeyer, Jan Broeckhove, and Peter Hellinckx. Task-set generator for schedulability analysis using the TACLeBench benchmark suite. In Proceedings of the Embedded Operating Systems Workshop: EWiLi 2016, pages 1-6. CEUR Workshop Proceedings, October 2016.
  7. Debie1. URL: https://www.irit.fr/wiki/doku.php?id=wtc:benchmarks:debie1.
  8. Robert Dick. Embedded system synthesis benchmarks suite (E3S), 2010. URL: http://ziyang.eecs.umich.edu/~dickrp/e3s/.
  9. Robert P. Dick, David L. Rhodes, and Wayne Wolf. TGFF: Task graphs for free. In Proceedings of the 6th International Workshop on Hardware/Software Codesign, pages 97-101. IEEE Computer Society, 1998.
  10. H. Falk, S. Altmeyer, P. Hellinckx, B. Lisper, W. Puffitsch, C. Rochange, M. Schoeberl, R. Sørensen, P. Wägemann, and S. Wegener. TACLeBench: A benchmark collection to support worst-case execution time research. In Proceedings of the 16th International Workshop on Worst-Case Execution Time Analysis (WCET'16), volume 55 of OpenAccess Series in Informatics (OASIcs), pages 1-10. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. URL: http://dx.doi.org/10.4230/OASIcs.WCET.2016.2.
  11. Jan Gustafsson, Adam Betts, Andreas Ermedahl, and Björn Lisper. The Mälardalen WCET benchmarks - past, present and future. In Björn Lisper, editor, 10th International Workshop on Worst-Case Execution Time Analysis (WCET 2010), volume 15 of OpenAccess Series in Informatics (OASIcs), pages 136-146, Brussels, Belgium, July 2010. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. URL: http://dx.doi.org/10.4230/OASIcs.WCET.2010.136.
  12. Syed Muhammad Zeeshan Iqbal, Yuchen Liang, and Håkan Grahn. ParMiBench - an open-source benchmark for embedded multiprocessor systems. IEEE Computer Architecture Letters, 9(2):45-48, 2010.
  13. C. G. Lee. UTDSP Benchmark Suite, July 2011. URL: http://www.eecg.toronto.edu/~corinna/DSP/infrastructure/UTDSP.html.
  14. Fadia Nemer, Hugues Cassé, Pascal Sainrat, Jean-Paul Bahsoun, and Marianne De Michiel. PapaBench: a free real-time benchmark. In 6th International Workshop on Worst-Case Execution Time Analysis (WCET'06), volume 4 of OpenAccess Series in Informatics (OASIcs). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2006. URL: http://dx.doi.org/10.4230/OASIcs.WCET.2006.678.
  15. Claire Pagetti, David Saussié, Romain Gratia, Eric Noulard, and Pierre Siron. The ROSACE case study: From Simulink specification to multi/many-core execution. In 20th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2014), Berlin, Germany, April 15-17, 2014, pages 309-318, 2014.
  16. Parasuite. URL: http://parasuite.inria.fr/.
  17. Louis-Noël Pouchet. PolyBench: The polyhedral benchmark suite, 2012. URL: http://www.cs.ucla.edu/~pouchet/software/polybench.
  18. Wolfgang Puffitsch, Eric Noulard, and Claire Pagetti. Off-line mapping of multi-rate dependent task sets to many-core platforms. Real-Time Systems, 51(5):526-565, 2015.
  19. Martin Schoeberl, Thomas B. Preusser, and Sascha Uhrig. The embedded Java benchmark suite JemBench. In Proceedings of the 8th International Workshop on Java Technologies for Real-Time and Embedded Systems (JTRES'10), pages 120-127, New York, NY, USA, 2010. ACM. URL: http://dx.doi.org/10.1145/1850771.1850789.
  20. S. Stuijk, M. C. W. Geilen, and T. Basten. SDF³: SDF For Free. In Application of Concurrency to System Design, 6th International Conference (ACSD 2006), Proceedings, pages 276-278. IEEE Computer Society Press, Los Alamitos, CA, USA, June 2006. URL: http://www.es.ele.tue.nl/sdf3. DOI: http://dx.doi.org/10.1109/ACSD.2006.23.
  21. William Thies, Michal Karczmarek, and Saman Amarasinghe. StreamIt: A language for streaming applications. In Compiler Construction, pages 179-196. Springer, 2002.