Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering 2020
DOI: 10.1145/3324884.3416622

DeepTC-Enhancer

Abstract: Automated test case generation tools have been successfully proposed to reduce the amount of human and infrastructure resources required to write and run test cases. However, recent studies demonstrate that the readability of generated tests is very limited due to (i) uninformative identifiers and (ii) lack of proper documentation. Prior studies proposed techniques to improve test readability by either generating natural language summaries or meaningful method names. While these approaches are shown to improv…
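To make the readability problem described in the abstract concrete, the following is a minimal, hypothetical sketch of an automatically generated JUnit test with uninformative identifiers and no documentation, followed by the same scenario rewritten with descriptive names and a summary comment. The class under test (ArrayIntList, assumed here to be Apache Commons Primitives' implementation, which also appears as the subject class in the experiment excerpt below) and all identifier and method names are illustrative assumptions, not code taken from the paper or from any particular generation tool.

import static org.junit.Assert.assertEquals;

import org.apache.commons.collections.primitives.ArrayIntList;
import org.junit.Test;

public class ArrayIntListReadabilitySketch {

    // Typical machine-generated test: it compiles and covers the code,
    // but identifiers such as "arrayIntList0" and "int0" carry no meaning
    // and the tested scenario is not documented.
    @Test
    public void test0() throws Throwable {
        ArrayIntList arrayIntList0 = new ArrayIntList();
        arrayIntList0.add(0, 42);
        int int0 = arrayIntList0.get(0);
        assertEquals(42, int0);
    }

    // The same scenario after the two readability improvements the abstract
    // targets: meaningful identifiers and a natural language summary.
    /**
     * Adding an element at index 0 of an empty list makes it retrievable
     * via get(0).
     */
    @Test
    public void addingAtIndexZeroStoresTheElement() throws Throwable {
        ArrayIntList list = new ArrayIntList();
        list.add(0, 42);
        int storedValue = list.get(0);
        assertEquals(42, storedValue);
    }
}

Both tests exercise identical behavior; only the identifiers and documentation differ, which is the scope of improvement the abstract attributes to renaming and summarization approaches.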

Cited by 23 publications (17 citation statements). References 46 publications.
Citation statements (ordered by relevance):
“…Firstly, the number of participants (39) is aligned with previous studies in SBST [14], [15], and clearly superior to what is frequent in iSBSE [17]. In this sense, we consider that the sample size is adequate for the type of qualitative analysis presented.…”
Section: Threats To Validity (mentioning; confidence: 75%)
“…For step 3, the actual interactive experiment, we chose ArrayIntList as the class under test. This is a well-known class of medium complexity used in other SBST experiments with participants related to test readability improvement [14], [15]. Due to the time required to complete the experiment and the cognitive burden of manually revising many tests, all participants used one class under test.…”
Section: Methods (mentioning; confidence: 99%)
“…Braione et al [53] combined symbolic execution and SBST for programs with complex inputs; (3) readability of generated tests: Daka et al [54] proposed to assign names for tests by summarizing covered coverage goals. Roy et al [55] introduced deep learning approaches to generate test names; (4) fitness function design: Xu et al [56] proposed an adaptive fitness function for improving SBST. Rojas et al [4] proposed to combine multiple criteria to satisfy users' requirements.…”
Section: Related Work (mentioning; confidence: 99%)
“…Zhou et al [4] proposed a method to select coverage goals from multiple criteria instead of combining all goals; (4) Readability of created tests: Daka et al [70] suggested naming tests by stating covered goals. Deep learning techniques were presented by Roy et al [71]; (5) Applying SBST to more software fields such as Machine Learning libraries [72], Android applications [73], Web APIs [74], and Deep Neural Networks [75].…”
Section: Related Work (mentioning; confidence: 99%)
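As a rough illustration of the coverage-goal-based naming idea that the Related Work excerpts above attribute to Daka et al. ([54], [70]), the hypothetical JUnit test below is named after the coverage goal it satisfies (the out-of-range branch of ArrayIntList.get) instead of carrying a generic name such as test3. The class under test and the chosen goal are assumptions for illustration only and are not taken from the cited works.

import static org.junit.Assert.fail;

import org.apache.commons.collections.primitives.ArrayIntList;
import org.junit.Test;

public class ArrayIntListGoalNamedTestSketch {

    // Assumed naming scheme: the method name summarizes the covered goal,
    // i.e. the branch of get(int) that raises IndexOutOfBoundsException.
    @Test
    public void testGetThrowsIndexOutOfBoundsExceptionOnEmptyList() {
        ArrayIntList emptyList = new ArrayIntList();
        try {
            emptyList.get(0);
            fail("Expecting an IndexOutOfBoundsException for an empty list");
        } catch (IndexOutOfBoundsException expected) {
            // Reaching this branch is the coverage goal the name describes.
        }
    }
}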