Shifting programming education assessment from exercise outputs toward deeper comprehension (Invited Talk)

Author: André L. Santos

Author Details

André L. Santos
  • Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR-IUL, Portugal

Acknowledgements

I thank the ICPEC organizing committee for this Invited Talk.

Cite As

André L. Santos. Shifting programming education assessment from exercise outputs toward deeper comprehension (Invited Talk). In 4th International Computer Programming Education Conference (ICPEC 2023), Open Access Series in Informatics (OASIcs), Volume 112, pp. 1:1-1:5. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. https://doi.org/10.4230/OASIcs.ICPEC.2023.1

Abstract

Practice and assessment in introductory programming courses are typically centered on problems that require students to write code to produce specific outputs. While these exercises are necessary and useful for providing practice and mastering syntax, their solutions may not effectively measure learners' real understanding of programming concepts. Misconceptions and knowledge gaps may be hidden behind an exercise solution that produces correct outputs. Furthermore, obtaining answers has never been easier than in the present era of chatbots, so why should we care (much) about the solutions? Learning a skill is a process that requires iteration and failure, in which feedback is of utmost importance. A programming exercise is a means to build up reasoning capabilities and strategic knowledge, not an end in itself. It is the process that matters most, not the exercise solution. Assessing whether the learning process was effective requires much more than checking outputs.
I advocate that introductory programming learning could benefit from placing more emphasis on assessing learner comprehension rather than on checking outputs. Does this mean that we should not check whether the results are correct? Certainly not, but a significant part of the learning process would focus on assessing, and providing feedback on, the comprehension of the written code and its underlying concepts. Automated assessment systems would reflect this shift by incorporating evaluation items for this purpose, accompanied by adequate feedback. Achieving this involves numerous challenges and innovative technical approaches. In this talk, I present an overview of past and future work on tools that integrate code comprehension aspects into the process of solving programming exercises.
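
To make the proposed shift concrete, the sketch below is a minimal, hypothetical Python illustration (not a tool or example from the talk): it pairs the usual output check with a comprehension question about the learner's own submitted code. The exercise (count_evens) and all helper names (check_output, comprehension_item) are invented for this sketch.

    # Hypothetical illustration: an automated exercise item that checks outputs
    # AND asks a comprehension question about the learner's own code.
    # All names below are made up for the sketch.

    def count_evens(numbers):
        """Example of a student's submission for the exercise."""
        count = 0
        for n in numbers:
            if n % 2 == 0:
                count += 1
        return count

    def check_output(submission):
        """Traditional assessment: does the code produce the expected outputs?"""
        return submission([1, 2, 3, 4]) == 2 and submission([]) == 0

    def comprehension_item():
        """Comprehension-oriented item: ask about the state of the learner's
        own code during execution, not only about its final output."""
        question = ("While count_evens([1, 2, 3, 4]) runs, what is the value of "
                    "'count' right after the loop has processed the value 3?")
        expected = 1  # obtained by tracing the submitted code up to that point
        feedback = ("Trace the loop: 'count' is incremented only for even values, "
                    "so after 1, 2, 3 it has been incremented once (for 2).")
        return question, expected, feedback

    if __name__ == "__main__":
        print("Output check passed:", check_output(count_evens))
        question, expected, feedback = comprehension_item()
        answer = int(input(question + " "))
        print("Correct!" if answer == expected else f"Not quite. {feedback}")

In such a design, a correct output alone does not end the interaction: the learner also receives targeted feedback on how well they can explain or trace the code they just wrote.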

Subject Classification

ACM Subject Classification
  • Social and professional topics → Computer science education
  • Applied computing → Computer-assisted instruction
Keywords
  • Introductory programming
  • assessment
  • comprehension
