2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC) 2021
DOI: 10.1109/icpc52881.2021.00054

Let’s Ask Students About Their Programs, Automatically

Abstract: Students sometimes produce code that works but that its author does not comprehend. For example, a student may apply a poorly-understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs. We propose an approach to automatically generate questions …

Cited by 11 publications (10 citation statements).
References 40 publications (50 reference statements).
“…While giving KTC and KC feedback can be important in automated assessment tools for CS assignments, these feedback types are too dependent on the tasks' definition, being more a matter of configuration, either to the instructor or exercise author, than a research need. Furthermore, as most of the assignments considered are open-answer, these tools are not commonplace to incorporate KMC feedback, even though a few recent attempts exist [142,188,189]. Hence, research has concentrated efforts on KM and KH types of feedback.…”
Section: Feedback (RQ4)
Citation type: mentioning
confidence: 99%
“…A QLC may address any aspect that relates to a piece of code, ranging from questions related to terminology, mastery of programming primitives, algorithmic strategy, relation to other pieces of code, program dynamics, etc [11]. Depending on the question's nature, some are suitable to determine the correct answer automatically, whereas others are not, particularly those with an open-ended answer (e.g., "explain the purpose of this code segment").…”
Section: Questions About Learners' Code
Citation type: mentioning
confidence: 99%
“…In previous work we proposed the notion of Questions about Learner's Code (QLC) [11] as a learning activity to promote selfreflection and contribute to deeper comprehension of programs. These are questions obtained by static and dynamic analysis of a learner's own program (e.g., possibly submitted to an assessment system) that ask about program characteristics in terms of programming concepts (e.g., Which is the role of variable [i] in function [f]?, with f being a function authored by the learner and i a variable therein).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
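The kind of question quoted above can be pictured with a minimal sketch. The snippet below is illustrative only and is not the generator from the cited paper: it statically inspects a learner's submission with Python's ast module and emits one "What is the role of variable ... in function ...?" question per assigned variable. The sample learner program and all identifiers are invented for the example.

# Illustrative sketch only: a tiny static-analysis question generator in the
# spirit of QLCs, not the implementation from the cited paper. The learner
# program and all identifiers below are invented for the example.
import ast

LEARNER_CODE = """
def count_positive(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += 1
    return total
"""

def variable_role_questions(source: str):
    """Yield one question per variable assigned inside each function."""
    tree = ast.parse(source)
    for func in ast.walk(tree):
        if not isinstance(func, ast.FunctionDef):
            continue
        assigned = {
            node.id
            for node in ast.walk(func)
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
        }
        for name in sorted(assigned):
            yield f"What is the role of variable '{name}' in function '{func.name}'?"

if __name__ == "__main__":
    for question in variable_role_questions(LEARNER_CODE):
        print(question)

Run on the sample program, this prints questions about the loop variable n and the accumulator total. Dynamic analysis, as mentioned in the quotation, could presumably extend such a generator by executing the submission and asking about observed values, while grading an open-ended "role" question would still require predefined answer options or manual review.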
“…Hassan and Zilles propose a novel 'reverse-tracing' question format which is non-trivial even when students have access to a computer Hassan and Zilles (2021), and Lehtinen et al explored automatically creating multiple choice questions from students' own code Lehtinen et al (2021). A variety of approaches and tools for helping students develop code tracing skills have also been reported Xie et al (2018); Qi and Fossati (2020).…”
Section: Code Explanations and Their Assessment
Citation type: mentioning
confidence: 99%
“…With respect to the code explanations, future work should explore whether these could be used as the basis for generating multiple-choice questions related to the student's own code, similar to prior work Lehtinen et al (2021), which could serve as a reflection task. For example, one could create an explanation of the student's program as well as several other explanations for slight modifications to this program, similar in methodology to mutation testing Jia and Harman (2010) (e.g.…”
Section: Future Work
Citation type: mentioning
confidence: 99%
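The mutation-testing analogy in the last quotation can also be sketched. The following is a hypothetical illustration, not an existing tool or the cited authors' method: the correct option of a multiple-choice explanation question is derived from the learner's program, and a distractor is derived from a slightly mutated copy (here, ">=" weakened to ">"). The program, the single mutation operator, and the explanation templates are all assumptions made for the example.

# Hypothetical illustration of the mutation-style distractor idea from the
# quoted future work; not an existing tool. The program, the mutation
# operator, and the explanation templates are invented for this example.
# Requires Python 3.9+ for ast.unparse.
import ast

LEARNER_CODE = "def is_adult(age):\n    return age >= 18\n"

# Plain-language phrasing for comparison operators (assumed templates).
PHRASES = {ast.GtE: "at least", ast.Gt: "greater than", ast.Lt: "less than"}

class WeakenGtE(ast.NodeTransformer):
    """Mutate '>=' into '>' to create a near-miss variant of the program."""
    def visit_Compare(self, node):
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op for op in node.ops]
        return node

def explanation(source: str) -> str:
    """Describe the first comparison in the code as a one-line explanation."""
    tree = ast.parse(source)
    cmp_node = next(n for n in ast.walk(tree) if isinstance(n, ast.Compare))
    phrase = PHRASES[type(cmp_node.ops[0])]
    threshold = ast.literal_eval(cmp_node.comparators[0])
    return f"Returns True when the input is {phrase} {threshold}."

correct = explanation(LEARNER_CODE)                  # from the learner's own code
mutant = WeakenGtE().visit(ast.parse(LEARNER_CODE))  # slightly modified copy
distractor = explanation(ast.unparse(ast.fix_missing_locations(mutant)))

print("Correct option:", correct)     # ... at least 18.
print("Distractor:    ", distractor)  # ... greater than 18.

A real generator along these lines would presumably need a set of mutation operators and a check that each mutant's explanation actually differs from the correct one, mirroring the equivalent-mutant problem known from mutation testing.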