2023
DOI: 10.48550/arxiv.2302.06527
Preprint

Adaptive Test Generation Using a Large Language Model

Cited by 5 publications (4 citation statements)
References 0 publications
“…1) During our preliminary concept-distillation experiments, ChatGPT returned a wide range of concepts over the looped execution, including highly relevant, irrelevant, and sometimes duplicated ones. As discussed in [5], [6], more specific context information and good examples yield improved semantic accuracy and more focused responses. Thus, we need to provide illustrative examples in the prompt to distil the highly relevant concepts while eliminating the rest.…”
Section: B. Framework Overview
confidence: 89%
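The pattern this statement describes is standard few-shot prompting: a worked example in the prompt anchors the model to the relevant concept space. Below is a minimal sketch, assuming the OpenAI Python client (v1) and a chat model; the scenario text, concept lists, and prompt wording are illustrative, not taken from the cited paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot prompt: the worked example shows the model which concepts
# count as "highly relevant", steering it away from irrelevant or
# duplicated ones. All scenario text below is purely illustrative.
prompt = """Extract only the concepts relevant to scenario-based testing
of autonomous vehicles. Ignore irrelevant or duplicated concepts.

Example description: "A pedestrian steps onto the road at night while a
truck ahead brakes hard."
Example concepts: pedestrian crossing, low visibility, emergency braking

Description: "A cyclist swerves into the lane as the ego vehicle overtakes."
Concepts:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output keeps looped runs comparable
)
print(response.choices[0].message.content)
```

Setting the temperature to zero is one plausible way to reduce the run-to-run variability the statement observes during looped execution.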
“…Large language models (LLMs), such as GPT-3 [1], Codex [2], and ChatGPT [3], have made remarkable progress. Trained on an enormous amount of indiscriminate data from the entire internet, these LLMs embed knowledge from different domains and are thus capable of answering questions, writing code, drawing pictures, or translating languages across application areas [4]-[6]. In this paper, we aim to investigate if and how the knowledge of a specific application domain, e.g., scenario-based testing of autonomous vehicles, can be extracted to facilitate subsequent tasks, e.g.…”
Section: Introduction
confidence: 99%
“…Language models have been adapted to perform program fuzzing (Xia et al., 2023a; Deng et al., 2023), test generation (Schäfer et al., 2023), automated program repair (Xia et al., 2023b), and source-level algorithmic optimization (Madaan et al., 2023). The introduction of fill-in-the-middle capabilities is especially useful for software engineering use cases such as code completion, and has become common in recent code models such as InCoder (Fried et al., 2023), SantaCoder (Allal et al., 2023), StarCoder (Lozhkov et al., 2024), and Code Llama (Rozière et al., 2023).…”
Section: Related Work
confidence: 99%
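For context, fill-in-the-middle (FIM) models are prompted with sentinel tokens marking the code before and after the gap to be completed. A minimal sketch using Hugging Face transformers; the checkpoint name and sentinel strings follow the published StarCoder convention, and other FIM-capable models (e.g. Code Llama) use different sentinels.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoderbase-1b"  # any StarCoder-family checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prefix = "def average(xs):\n    "
suffix = "\n    return total / len(xs)"

# FIM prompt: the model generates the missing middle given the code
# before (<fim_prefix>) and after (<fim_suffix>) the cursor position.
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)  # e.g. "total = sum(xs)"
```

This bidirectional conditioning is what makes FIM a better fit for in-editor completion than pure left-to-right generation.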
“…There has been some research on using LLMs for white-box test generation. Schäfer et al. [1] and Li et al. [2] have explored ChatGPT in this regard, demonstrating its potential for generating effective white-box test cases. However, using LLMs for black-box testing remains largely unexplored.…”
Section: Introduction
confidence: 99%
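To illustrate the white-box setup this statement refers to: the model is shown the implementation of the unit under test, not just its interface. A minimal sketch, again assuming the OpenAI Python client; the `clamp` function is a hypothetical unit under test, not from the cited works.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical unit under test; in white-box generation the model sees
# the body, so it can target the error branch as well as the happy path.
source = '''def clamp(x, lo, hi):
    if lo > hi:
        raise ValueError("empty range")
    return max(lo, min(x, hi))'''

prompt = (
    "Write pytest unit tests for the following function. "
    "Cover the normal path, both boundaries, and the error branch.\n\n"
    + source
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # generated test code
```

In the black-box variant the statement contrasts with, only the signature and a docstring would replace `source`, leaving the model to infer behavior without seeing the branches.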