Automated test case generation tools have been successfully proposed to reduce the amount of human and infrastructure resources required to write and run test cases. However, recent studies demonstrate that the readability of generated tests is very limited due to (i) uninformative identifiers and (ii) a lack of proper documentation. Prior studies proposed techniques to improve test readability by generating either natural language summaries or meaningful method names. While these approaches are shown to improve test readability, they are also affected by two limitations: (1) generated summaries are often perceived as too verbose and redundant by developers, and (2) readable tests require not only proper method names but also meaningful identifiers (within-method readability). In this work, we combine template-based methods and Deep Learning (DL) approaches to automatically generate test case scenarios (elicited from natural language patterns of test case statements) and to train DL models on path-based representations of source code to generate meaningful identifier names. Our approach, called DeepTC-Enhancer, recommends documentation and identifier names with the ultimate goal of enhancing the readability of automatically generated test cases. An empirical evaluation with 36 external and internal developers shows that (1) DeepTC-Enhancer significantly outperforms the baseline approach for generating summaries and performs on par with the baseline approach for test case renaming, (2) the transformations proposed by DeepTC-Enhancer result in a significant increase in the readability of automatically generated test cases, and (3) there is a significant difference in feature preferences between external and internal developers.
Online learning has emerged as the “new norm” due to the COVID-19 crisis. Compared with institutions’ and teachers’ responses to online teaching, little is known about students’ perceptions of the influence of online assessment practices. The present study explored the perceived effects of learning-oriented online assessment on L2 students’ feedback literacy, and individual differences in feedback literacy development, from an ecological perspective. We used multiple sources of data, including a survey on student feedback literacy, semi-structured interviews with two focal students, drafts produced by them and the related teacher feedback, and supplementary data reflecting the online assessment practices in the course. Results demonstrated that the students held less favorable opinions of the online mode of learning in promoting feedback literacy. However, they perceived the development of their feedback literacy positively in the aspects of appreciating feedback, developing judgements, and taking action. Considerable variation was identified in the development of the two focal students’ feedback literacy, especially in the aspects of managing affect and taking action. The findings revealed the negative influence of misalignment between micro- and macro-level factors on student feedback literacy, and how such misalignment interacted with learner factors to influence individual students’ feedback literacy when learning-oriented assessment (LOA) was implemented during COVID-19. The paper proposed a fine-grained model for developing student feedback literacy through learning-oriented online assessment. With a special focus on misalignment, the model provided insights into the interactional dynamics among learners, the classroom, and larger contexts in using LOA to enhance student feedback literacy online. Relevant pedagogical implications for developing student feedback literacy within and beyond COVID-19 were discussed.