CodeDJ: Reproducible Queries over Large-Scale Software Repositories

Authors: Petr Maj, Konrad Siek, Alexander Kovalenko, Jan Vitek




File

  • LIPIcs.ECOOP.2021.6.pdf
  • Filesize: 1.18 MB
  • 24 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ECOOP.2021.6

Author Details

Petr Maj
  • Czech Technical University in Prague, Czech Republic
Konrad Siek
  • Czech Technical University in Prague, Czech Republic
Alexander Kovalenko
  • Czech Technical University in Prague, Czech Republic
Jan Vitek
  • Czech Technical University in Prague, Czech Republic
  • Northeastern University, Boston, MA, USA

Cite As

Petr Maj, Konrad Siek, Alexander Kovalenko, and Jan Vitek. CodeDJ: Reproducible Queries over Large-Scale Software Repositories. In 35th European Conference on Object-Oriented Programming (ECOOP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 194, pp. 6:1-6:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.ECOOP.2021.6

Abstract

Analyzing massive code bases is a staple of modern software engineering research, a welcome side-effect of the advent of large-scale software repositories such as GitHub. Selecting which projects to analyze is labor-intensive and can lead to biased results if the selection is not representative of the population of interest. One issue faced by researchers is that the interface exposed by software repositories only allows the most basic of queries. CodeDJ is an infrastructure for querying repositories, composed of a persistent datastore, constantly updated with data acquired from GitHub, and an in-memory database with a Rust query interface. CodeDJ supports reproducibility: historical queries are answered deterministically using past states of the datastore, so researchers can reproduce published results. To illustrate the benefits of CodeDJ, we identify biases in the data of a published study and, by repeating the analysis with new data, demonstrate that the study's conclusions were sensitive to the choice of projects.
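The abstract's central mechanism is that a query is evaluated against a pinned past state of the constantly updated datastore, so re-running it later returns the same answer. The Rust sketch below is a minimal, hypothetical illustration of that idea; it is not the actual CodeDJ API, and every identifier in it (Datastore, Project, as_of, the field names) is invented for this example.

    // Hypothetical sketch, not the CodeDJ API: queries are evaluated
    // against a pinned snapshot of the datastore, so re-running a query
    // later yields the same result even after new data has been ingested.

    #[derive(Debug)]
    struct Project {
        name: String,
        stars: u32,
        commits: u32,
        ingested_at: u64, // logical timestamp at which the record entered the datastore
    }

    struct Datastore {
        projects: Vec<Project>,
    }

    impl Datastore {
        /// View of the datastore as of `snapshot`: only records ingested
        /// at or before that timestamp are visible to the query.
        fn as_of(&self, snapshot: u64) -> impl Iterator<Item = &Project> {
            self.projects.iter().filter(move |p| p.ingested_at <= snapshot)
        }
    }

    fn main() {
        let store = Datastore {
            projects: vec![
                Project { name: "alpha".into(), stars: 950, commits: 1_200, ingested_at: 1 },
                Project { name: "beta".into(), stars: 40, commits: 3_500, ingested_at: 1 },
                Project { name: "gamma".into(), stars: 2_000, commits: 80, ingested_at: 2 },
            ],
        };

        // A project-selection query pinned to snapshot 1: popular projects
        // with a nontrivial commit history. "gamma", ingested at snapshot 2,
        // is invisible here, so the result is stable across datastore updates.
        let selected: Vec<&Project> = store
            .as_of(1)
            .filter(|p| p.stars >= 100 && p.commits >= 1_000)
            .collect();

        for p in &selected {
            println!("{} ({} stars, {} commits)", p.name, p.stars, p.commits);
        }
    }

Re-running this query against snapshot 1 after "gamma" arrives yields the identical project set; only a query explicitly pinned to snapshot 2 would see the new project. That is the determinism property the abstract claims for historical queries.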

Subject Classification

ACM Subject Classification
  • Software and its engineering → Ultra-large-scale systems
Keywords
  • Software
  • Mining Code Repositories
  • Source Code Analysis

