Use of Programming Aids in Undergraduate Courses

Authors: Ana Rita Peixoto, André Glória, José Luís Silva, Maria Pinto-Albuquerque, Tomás Brandão, Luís Nunes



File

OASIcs.ICPEC.2024.20.pdf
  • Filesize: 0.49 MB
  • 9 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.ICPEC.2024.20

Author Details

Ana Rita Peixoto
  • Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, Portugal
André Glória
  • Instituto Universitário de Lisboa (ISCTE-IUL), Instituto de Telecomunicações, Portugal
José Luís Silva
  • ITI/LARSyS, Instituto Universitário de Lisboa (ISCTE-IUL), Portugal
Maria Pinto-Albuquerque
  • Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, Portugal
Tomás Brandão
  • Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, Portugal
Luís Nunes
  • Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, Portugal

Cite As

Ana Rita Peixoto, André Glória, José Luís Silva, Maria Pinto-Albuquerque, Tomás Brandão, and Luís Nunes. Use of Programming Aids in Undergraduate Courses. In 5th International Computer Programming Education Conference (ICPEC 2024). Open Access Series in Informatics (OASIcs), Volume 122, pp. 20:1-20:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/OASIcs.ICPEC.2024.20

Abstract

The use of external tips and applications to help with programming assignments by novice programmers is a double-edged sword: it can help by showing examples of problem-solving strategies, but it can also prevent learning, because recognizing a good solution is not the same skill as creating one. A study was conducted during the 2nd semester of 2023/24 in the Object-Oriented Programming course to help understand the impact of programming aids on learning. The main questions that drove this study were: Which type(s) of assistance do students use when learning to program? When and where do they use it? Does it affect grades? Results, even though based on a relatively small sample, seem to indicate that students who used aids perceive improved learning when using advice from Colleagues, Copilot-style tools, and Large Language Models. Results of correlating average grades with tool usage suggest that experience in using these tools is key to their successful use, but, contrary to students' perceptions, learning gains are marginal in the end result.
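
The correlation between tool usage and average grades mentioned above could be computed in a number of ways; the following is a minimal sketch of one such analysis, where the column names, values, and the choice of Spearman's rank correlation are illustrative assumptions, not the paper's actual dataset or method.

```python
# Hypothetical sketch of correlating self-reported aid usage with grades.
# Column names and values are illustrative, not taken from the study.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.DataFrame({
    # 1 = student reported using LLM/Copilot-style aids, 0 = did not
    "used_aid": [0, 1, 1, 0, 1, 0, 1, 1],
    # final course grade, assuming Portugal's 0-20 grading scale
    "grade":    [12, 14, 15, 11, 16, 13, 10, 17],
})

# Spearman's rank correlation is a common choice for ordinal data
# and small samples like the one the abstract describes.
rho, p_value = spearmanr(responses["used_aid"], responses["grade"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```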

Subject Classification

ACM Subject Classification
  • Social and professional topics → Computing education
Keywords
  • Teaching Programming
  • Programming aids

