Practice and assessment in introductory programming courses are typically centered on problems that require students to write code that produces specific outputs. While these exercises are necessary and useful for providing practice and mastering syntax, their solutions may not effectively measure learners' real understanding of programming concepts. Misconceptions and knowledge gaps may remain hidden behind an exercise solution that produces correct outputs. Furthermore, obtaining answers has never been so easy as in the present era of chatbots, so why should we care (much) about the solutions? Learning a skill is a process that requires iteration and failure, in which feedback is of utmost importance. A programming exercise is a means to build up reasoning capabilities and strategic knowledge, not an end in itself. It is the process that matters most, not the exercise solution. Assessing whether the learning process was effective requires much more than checking outputs. I advocate that introductory programming learning could benefit from placing more emphasis on assessing learner comprehension than on checking outputs. Does this mean that we should not check whether the results are correct? Certainly not, but a significant part of the learning process would focus on assessing, and providing feedback on, the comprehension of the written code and the underlying concepts. Automated assessment systems would reflect this shift by incorporating evaluation items for this purpose, accompanied by adequate feedback. Achieving this involves numerous challenges and calls for innovative technical approaches. In this talk, I present an overview of past and future work on tools that integrate code comprehension aspects into the process of solving programming exercises.