CodeDJ: Reproducible Queries over Large-Scale Software Repositories (Artifact)

Authors: Petr Maj, Konrad Siek, Alexander Kovalenko, Jan Vitek



Artifact Description

DARTS.7.2.13.pdf
  • Filesize: 0.6 MB
  • 4 pages

Author Details

Petr Maj
  • Czech Technical University in Prague, Czech Republic
Konrad Siek
  • Czech Technical University in Prague, Czech Republic
Alexander Kovalenko
  • Czech Technical University in Prague, Czech Republic
Jan Vitek
  • Czech Technical University in Prague, Czech Republic
  • Northeastern University, Boston, MA, USA

Cite As

Petr Maj, Konrad Siek, Alexander Kovalenko, and Jan Vitek. CodeDJ: Reproducible Queries over Large-Scale Software Repositories (Artifact). In Special Issue of the 35th European Conference on Object-Oriented Programming (ECOOP 2021). Dagstuhl Artifacts Series (DARTS), Volume 7, Issue 2, pp. 13:1-13:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/DARTS.7.2.13

Abstract

Analyzing massive code bases is a staple of modern software engineering research, a welcome side-effect of the advent of large-scale software repositories such as GitHub. Selecting which projects to analyze is a labor-intensive process, and one that can lead to biased results if the selection is not representative of the population of interest. One issue researchers face is that the interface exposed by software repositories allows only the most basic queries. CodeDJ is an infrastructure for querying repositories composed of a persistent datastore, constantly updated with data acquired from GitHub, and an in-memory database with a Rust query interface. CodeDJ supports reproducibility: historical queries are answered deterministically using past states of the datastore, so researchers can reproduce published results. To illustrate the benefits of CodeDJ, we identify biases in the data of a published study and, by repeating the analysis with new data, demonstrate that the study’s conclusions were sensitive to the choice of projects.
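The abstract describes CodeDJ's core design: a persistent datastore whose past states can be addressed explicitly, and an in-memory database queried from Rust. The following is a minimal, self-contained sketch of that idea, not the real CodeDJ API; the names Project, Snapshot, savepoint, and top_starred are hypothetical stand-ins that only illustrate how pinning a query to a fixed past state of the datastore makes its answer deterministic.

    // Hypothetical sketch, not the actual CodeDJ crate. It models the idea the
    // abstract describes: a query runs against an in-memory view of the datastore
    // pinned to a fixed past state (a "savepoint"), so re-running the same query
    // later yields the same answer.

    #[derive(Debug, Clone)]
    struct Project {
        name: String,
        stars: u64,
        commits: u64,
        language: String,
    }

    /// An immutable view of the datastore as it existed at `savepoint`.
    /// Pinning queries to a savepoint is what makes them reproducible.
    struct Snapshot {
        savepoint: u64, // e.g., a timestamp identifying the datastore state
        projects: Vec<Project>,
    }

    impl Snapshot {
        /// Select the top `n` projects by stars for a given language.
        /// Because the snapshot never changes, the selection is deterministic.
        fn top_starred(&self, language: &str, n: usize) -> Vec<&Project> {
            let mut selected: Vec<&Project> = self
                .projects
                .iter()
                .filter(|p| p.language == language)
                .collect();
            selected.sort_by(|a, b| b.stars.cmp(&a.stars));
            selected.truncate(n);
            selected
        }
    }

    fn main() {
        // In the real system the snapshot would be loaded from the persistent
        // datastore; here we build a tiny in-memory stand-in by hand.
        let snapshot = Snapshot {
            savepoint: 1_620_000_000,
            projects: vec![
                Project { name: "alpha".into(), stars: 420, commits: 1_300, language: "Rust".into() },
                Project { name: "beta".into(),  stars: 980, commits: 4_100, language: "Rust".into() },
                Project { name: "gamma".into(), stars: 150, commits: 800,   language: "Java".into() },
            ],
        };

        println!("querying datastore state at savepoint {}", snapshot.savepoint);
        for p in snapshot.top_starred("Rust", 2) {
            println!("{} ({} stars, {} commits)", p.name, p.stars, p.commits);
        }
    }

Because the snapshot is immutable, re-running top_starred against the same savepoint always returns the same projects in the same order, which is the property that lets researchers reproduce a published selection of projects.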

Subject Classification

ACM Subject Classification
  • Software and its engineering → Ultra-large-scale systems
Keywords
  • Software
  • Mining Code Repositories
  • Source Code Analysis

