Abstract: Bugs in Scratch programs can spoil the fun and inhibit learning success. Many common bugs result from recurring patterns of bad code. In this paper we present a collection of common code patterns that typically hint at bugs in Scratch programs, and the LitterBox tool, which can automatically detect them. We empirically evaluate how frequently these patterns occur and how severe their consequences usually are. While fixing bugs is inevitably part of learning, the possibility to identify the bugs automatic…
“…Our tool chain for anomaly detection for SCRATCH uses an extended version of LitterBox [10] to generate a collection (mi , . .…”
Section: Methods (mentioning)
confidence: 99%
“…We therefore conducted a sensitivity analysis on these two parameters with minimum size (= 2) and maximum deviation level (= 10000) as fixed variables, changing only minimum support and minimum confidence. For the minimum support we tested the values (1, 5, 10, 15, 20), where 20 is the default JADET value. For confidence we tested the values (0.…”
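The sensitivity analysis quoted above amounts to a grid sweep over the two varied parameters while the other two stay fixed. A minimal sketch of such a sweep is shown below; `run_jadet` is a hypothetical stand-in for the actual mining step, and the confidence values are illustrative only, since the quote is truncated before listing them.

```python
from itertools import product

# Fixed variables, as described in the quoted passage.
MIN_SIZE = 2
MAX_DEVIATION = 10000

# Varied parameters. 20 is the JADET default for minimum support;
# the confidence values here are illustrative placeholders.
SUPPORT_VALUES = [1, 5, 10, 15, 20]
CONFIDENCE_VALUES = [0.5, 0.7, 0.9]

def run_jadet(min_support, min_confidence):
    # Hypothetical stand-in: would invoke the pattern miner with the
    # given configuration and return the number of reported violations.
    return min_support * 10 + int(min_confidence * 100)

# One mining run per point on the (support, confidence) grid.
results = {
    (s, c): run_jadet(s, c)
    for s, c in product(SUPPORT_VALUES, CONFIDENCE_VALUES)
}
```

Comparing `results` across grid points then shows how sensitive the reported violations are to each parameter choice.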
Section: Methods (mentioning)
confidence: 99%
“…• Bug pattern (defective): The violation hints at a defect that a generic SCRATCH linter such as LitterBox [10] could find equally well.…”
Section: Methods (mentioning)
confidence: 99%
“…It has been shown that various types of code smells are prevalent [1], [17], [36], [40] and have a negative impact on code understanding [16]. There are tools for finding code smells in Scratch programs such as Hairball [5], Qualityhound [40] or SAT [6], and LitterBox [10] detects predefined bug patterns automatically.…”
Section: Program Analysis For Scratch (mentioning)
confidence: 99%
“…We implemented our approach as an extension of LitterBox [10] and JADET [42], [44] and it is available at: https://github.com/se2p/scratch-anomalies…”
In programming education, teachers need to monitor and assess the progress of their students by investigating the code they write. Code quality of programs written in traditional programming languages can be automatically assessed with automated tests, verification tools, or linters. In many cases these approaches rely on some form of manually written formal specification to analyze the given programs. Writing such specifications, however, is hard for teachers, who are often not adequately trained for this task. Furthermore, automated tool support for popular block-based introductory programming languages like SCRATCH is lacking. Anomaly detection is an approach to automatically identify deviations of common behavior in datasets without any need for writing a specification. In this paper, we use anomaly detection to automatically find deviations of SCRATCH code in a classroom setting, where anomalies can represent erroneous code, alternative solutions, or distinguished work. Evaluation on solutions of different programming tasks demonstrates that anomaly detection can successfully be applied to tightly specified as well as open-ended programming tasks.
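The core idea of specification-free anomaly detection described above can be illustrated with a deliberately simplified sketch (not the paper's actual algorithm): mine the block patterns shared by most solutions in a class, then flag any solution that deviates from those common patterns. The student names and block identifiers below are invented for illustration.

```python
from collections import Counter

# Each solution is modeled as the set of Scratch blocks it uses.
solutions = {
    "alice": {"whenGreenFlag", "forever", "moveSteps"},
    "bob":   {"whenGreenFlag", "forever", "moveSteps"},
    "carol": {"whenGreenFlag", "moveSteps"},  # lacks the 'forever' loop
}

MIN_SUPPORT = 2  # a pattern is "common" if at least 2 solutions use it

# Count how many solutions use each block, then keep the common ones.
counts = Counter(block for blocks in solutions.values() for block in blocks)
common = {block for block, n in counts.items() if n >= MIN_SUPPORT}

# A solution missing a common block is a candidate anomaly: it may be
# erroneous code, an alternative solution, or distinguished work.
anomalies = {
    name: common - blocks
    for name, blocks in solutions.items()
    if common - blocks
}
# anomalies == {"carol": {"forever"}}
```

No specification is written anywhere: the "expected" behavior emerges from what the majority of the class did, which is precisely what makes the approach attractive for teachers.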
The importance of programming education has led to dedicated educational programming environments, where users visually arrange block-based programming constructs that typically control graphical, interactive game-like programs. The Scratch programming environment is particularly popular, with more than 90 million registered users at the time of this writing. While the block-based nature of Scratch helps learners by preventing syntactical mistakes, there nevertheless remains a need to provide feedback and support in order to implement desired functionality. To support individual learning and classroom settings, this feedback and support should ideally be provided in an automated fashion, which requires tests to enable dynamic program analysis. In prior work we introduced Whisker, a framework that enables automated testing of Scratch programs. However, creating these automated tests for Scratch programs is challenging. In this paper, we therefore investigate how to automatically generate Whisker tests. Generating tests for Scratch raises important challenges: First, game-like programs are typically randomised, leading to flaky tests. Second, Scratch programs usually consist of animations and interactions with long delays, inhibiting the application of classical test generation approaches. Thus, the new application domain raises the question of which test generation technique is best suited to produce high coverage tests capable of detecting faulty behaviour. We investigate these questions using an extension of the Whisker test framework for automated test generation. 
Evaluation on common programming exercises, a random sample of 1000 Scratch user programs, and the 1000 most popular Scratch programs demonstrates that our approach enables Whisker to reliably accelerate test executions. Even though many Scratch programs are small and easy to cover, there remain many unique challenges for which advanced search-based test generation using many-objective algorithms is needed to achieve high coverage.
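The intuition behind coverage-guided test generation can be sketched in a few lines, keeping in mind that this toy example stands in for a Scratch project and that Whisker's actual algorithms (including many-objective search) are far more sophisticated. A fixed random seed sidesteps the flakiness problem that randomized game-like programs otherwise cause.

```python
import random

random.seed(0)  # fixed seed: keeps this randomized search reproducible

def toy_program(key):
    # Stand-in for a Scratch project: two branches, like a sprite
    # reacting differently depending on which key is pressed.
    if key == "space":
        return "jump"
    return "idle"

# Random test generation: sample input events, keep any input that
# covers a branch not seen before, stop once all branches are covered.
covered, suite = set(), []
for _ in range(100):
    key = random.choice(["space", "left", "right"])
    branch = toy_program(key)
    if branch not in covered:
        covered.add(branch)
        suite.append(key)
    if covered == {"jump", "idle"}:
        break
```

Even this naive random search covers a tiny program quickly; the harder Scratch programs mentioned above are exactly the cases where such random sampling stalls and guided, many-objective search pays off.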