COLA-Gen: Active Learning Techniques for Automatic Code Generation of Benchmarks

Authors Maksim Berezov, Corinne Ancourt, Justyna Zawalska, Maryna Savchenko




File

OASIcs.PARMA-DITAM.2022.3.pdf
  • Filesize: 0.76 MB
  • 14 pages

Document Identifiers

  • DOI: 10.4230/OASIcs.PARMA-DITAM.2022.3

Author Details

Maksim Berezov
  • Mines Paris, PSL University, France
Corinne Ancourt
  • Mines Paris, PSL University, France
Justyna Zawalska
  • Mines Paris, PSL University, France
Maryna Savchenko
  • Mines Paris, PSL University, France

Acknowledgements

We want to thank Patryk Kiepas for productive discussions and ideas that helped bring this research to completion.

Cite As

Maksim Berezov, Corinne Ancourt, Justyna Zawalska, and Maryna Savchenko. COLA-Gen: Active Learning Techniques for Automatic Code Generation of Benchmarks. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022) https://doi.org/10.4230/OASIcs.PARMA-DITAM.2022.3

Abstract

Benchmarking is crucial in code optimization. Validating optimization techniques or evaluating predictive performance models requires a set of programs that we consider representative. However, available benchmarks for code optimization are scarce, and the shortage is even more pronounced when machine learning techniques are involved, because these techniques are sensitive to the quality and quantity of the data used for training.
Our work aims to address these limitations. We present a methodology to efficiently generate benchmarks for the code optimization domain. It includes an automatic code generator, an associated DSL handling the high-level specification of the desired code, and a smart strategy for extending the benchmark as needed.
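
The COLA-Gen DSL itself is not reproduced on this page, so the following Python sketch is only a hypothetical illustration of the idea behind it: a high-level specification of a loop kernel (all field names and the shape of the emitted C code are assumptions, not the paper's actual syntax) that a small generator turns into compilable benchmark code.

from dataclasses import dataclass
from typing import List

@dataclass
class KernelSpec:
    """Hypothetical high-level description of one generated loop kernel."""
    name: str
    loop_bounds: List[int]   # iteration-space sizes, outermost loop first
    arrays: List[str]        # operand arrays used in the statement
    statement: str           # loop-body statement, written in C syntax

def generate_c(spec: KernelSpec) -> str:
    """Emit a naive C loop nest for the given specification."""
    size = 1
    for bound in spec.loop_bounds:
        size *= bound
    lines = [f"void {spec.name}(void) {{"]
    for array in spec.arrays:            # flat buffers covering the iteration space
        lines.append(f"  static double {array}[{size}];")
    indent = "  "
    for depth, bound in enumerate(spec.loop_bounds):
        lines.append(f"{indent}for (int i{depth} = 0; i{depth} < {bound}; i{depth}++)")
        indent += "  "
    lines.append(indent + spec.statement)
    lines.append("}")
    return "\n".join(lines)

spec = KernelSpec(
    name="kernel_0",
    loop_bounds=[256, 256],
    arrays=["A", "B", "C"],
    statement="C[i0 * 256 + i1] += A[i0 * 256 + i1] * B[i1 * 256 + i0];",
)
print(generate_c(spec))

Specifying kernels at this level makes it cheap to enumerate many syntactically valid programs; the remaining question is which candidates are worth generating and measuring, which is where the extension strategy comes in.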
The strategy is based on Active Learning techniques and helps generate the most representative data for our benchmark. We observed that Machine Learning models trained on our benchmark produce better-quality predictions and converge faster. Optimization based on the Active Learning method achieved up to 15% more speed-up than passive learning using the same amount of data.
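
The abstract names Active Learning only at a high level. A minimal pool-based sketch, assuming a Random Forest regressor over program features and a prediction-variance query criterion (both are assumptions for illustration, not necessarily the exact components used in the paper), could look like this:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins: candidate kernel feature vectors and a measurement oracle.
pool_X = rng.random((500, 8))

def measure_speedup(x):
    # Placeholder for generating, compiling and timing the corresponding kernel.
    return float(x @ np.arange(1, 9) + rng.normal(scale=0.1))

# Start from a small random seed set, then grow the benchmark actively.
labeled = [int(i) for i in rng.choice(len(pool_X), size=10, replace=False)]
y = {i: measure_speedup(pool_X[i]) for i in labeled}

for _ in range(30):                      # budget: 30 additional measured programs
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(pool_X[labeled], [y[i] for i in labeled])

    # Query criterion: disagreement among the forest's trees (prediction variance).
    candidates = [i for i in range(len(pool_X)) if i not in y]
    per_tree = np.stack([t.predict(pool_X[candidates]) for t in model.estimators_])
    chosen = candidates[int(per_tree.var(axis=0).argmax())]

    y[chosen] = measure_speedup(pool_X[chosen])
    labeled.append(chosen)

In a passive (random sampling) baseline, the chosen index would simply be drawn uniformly from the candidates; the abstract reports that the active strategy reaches up to 15% more speed-up for the same labeling budget.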

Subject Classification

ACM Subject Classification
  • Software and its engineering → Source code generation
  • Computing methodologies → Active learning settings
Keywords
  • Benchmarking
  • Code Optimization
  • Active Learning
  • DSL
  • Synthetic code generation
  • Machine Learning

