A Characteristic Study of Parameterized Unit Tests in .NET Open Source Projects

Authors: Wing Lam, Siwakorn Srisakaokul, Blake Bassett, Peyman Mahdian, Tao Xie, Pratap Lakshman, Jonathan de Halleux



File

LIPIcs.ECOOP.2018.5.pdf
  • Filesize: 0.67 MB
  • 27 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ECOOP.2018.5

Author Details

Wing Lam
  • University of Illinois at Urbana-Champaign, USA
Siwakorn Srisakaokul
  • University of Illinois at Urbana-Champaign, USA
Blake Bassett
  • University of Illinois at Urbana-Champaign, USA
Peyman Mahdian
  • University of Illinois at Urbana-Champaign, USA
Tao Xie
  • University of Illinois at Urbana-Champaign, USA
Pratap Lakshman
  • Microsoft, India
Jonathan de Halleux
  • Microsoft Research, USA

Cite As

Wing Lam, Siwakorn Srisakaokul, Blake Bassett, Peyman Mahdian, Tao Xie, Pratap Lakshman, and Jonathan de Halleux. A Characteristic Study of Parameterized Unit Tests in .NET Open Source Projects. In 32nd European Conference on Object-Oriented Programming (ECOOP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 109, pp. 5:1-5:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018) https://doi.org/10.4230/LIPIcs.ECOOP.2018.5

Abstract

In the past decade, parameterized unit testing has emerged as a promising method for specifying program behaviors under test in the form of unit tests. Developers can write parameterized unit tests (PUTs), unit-test methods that take parameters, in contrast to conventional unit tests, which take none. PUTs enable powerful test-generation tools such as Pex to check strong test oracles, beyond merely detecting uncaught runtime exceptions. In addition, PUTs are supported by various unit testing frameworks for .NET and by the JUnit framework for Java. However, no existing study offers insights into how developers write PUTs in either proprietary or open source development practice, posing barriers for various stakeholders to bring PUTs into widely adopted use in the software industry. To fill this gap, we first present categorization results for posts on the Microsoft MSDN Pex forum (contributed primarily by industrial practitioners) related to PUTs. We then use the categorization results to guide the design of the first characteristic study of PUTs in .NET open source projects, studying hundreds of PUTs that open source developers wrote for these projects. Our study findings provide valuable insights for various stakeholders such as current or prospective PUT writers (e.g., developers), PUT framework designers, test-generation tool vendors, testing researchers, and testing educators.
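
To make the contrast concrete, the sketch below (a minimal illustrative example, not taken from the paper) shows a conventional unit test next to a PUT, written in C# with NUnit-style attributes. The PUT states the push-then-pop property for any integer x; its assertion serves as a strong test oracle, and concrete arguments can come either from [TestCase] attributes, as here, or from a test-generation tool such as Pex/IntelliTest.

    // Minimal illustrative sketch (not from the paper): a conventional
    // unit test vs. a parameterized unit test (PUT), in C# with NUnit.
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class StackTests
    {
        // Conventional unit test: one fixed input, one fixed expected value.
        [Test]
        public void PushThenPop_ReturnsPushedValue()
        {
            var stack = new Stack<int>();
            stack.Push(42);
            Assert.AreEqual(42, stack.Pop());
        }

        // Parameterized unit test: the same behavior specified for any x.
        // The assertion is the test oracle; a tool such as Pex/IntelliTest
        // can generate arguments, or developers can supply them with
        // [TestCase] attributes as done here.
        [TestCase(0)]
        [TestCase(-1)]
        [TestCase(int.MaxValue)]
        public void PushThenPop_ReturnsPushedValue_ForAnyValue(int x)
        {
            var stack = new Stack<int>();
            stack.Push(x);
            Assert.AreEqual(x, stack.Pop());
        }
    }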

Subject Classification

ACM Subject Classification
  • Software and its engineering → Software testing and debugging
Keywords
  • Parameterized unit testing
  • Automated test generation
  • Unit testing
