Proceedings of the 1st International Conference on AI Engineering: Software Engineering for AI 2022
DOI: 10.1145/3522664.3528589

Improving generalizability of ML-enabled software through domain specification

Cited by 3 publications (2 citation statements)
References 38 publications
“…Despite many calls for the importance of requirements in ML (e.g., Rahimi et al., 2019; Vogelsang and Borg, 2019), requirements in ML projects are often poorly understood and documented (Nahar et al., 2022), which means that testers can rarely rely on existing requirements to guide their testing. Requirements elicitation is usually a manual and laborious process (e.g., interviews, focus groups, document analysis, prototyping), but the community has long been interested in automating parts of the process (Meth et al., 2013), e.g., by automatically extracting domain concepts from unstructured text (Shen and Breaux, 2022; Barzamini et al., 2022a). We rely on the insight that LLMs contain knowledge for many domains that can be extracted as KBs (Wang et al., 2020; Cohen et al., 2023), and apply this idea to requirements elicitation.…”
Section: Related Work
confidence: 99%
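
The statement above describes extracting domain knowledge from LLMs as knowledge bases (KBs) to guide requirements elicitation and testing. As a rough illustration of that idea, and not the cited papers' actual method, the following Python sketch expands a seed domain concept into a small concept hierarchy by repeatedly prompting a model; `query_llm`, the prompt wording, and the line-based parsing are all illustrative assumptions.

```python
# Minimal sketch: eliciting a domain-concept hierarchy from an LLM.
# `query_llm` is a hypothetical wrapper around any chat-completion API;
# the prompt and parsing below are assumptions, not the cited method.
from typing import Callable, Dict, List

def extract_sub_concepts(concept: str, query_llm: Callable[[str], str]) -> List[str]:
    """Ask the model for sub-concepts of `concept`, one per line."""
    prompt = (
        f"List important sub-concepts of '{concept}' in the context of "
        "software requirements. Return one concept per line, no numbering."
    )
    reply = query_llm(prompt)
    return [line.strip() for line in reply.splitlines() if line.strip()]

def build_knowledge_base(seed: str, query_llm: Callable[[str], str],
                         depth: int = 2) -> Dict[str, List[str]]:
    """Breadth-first expansion of the seed concept into a small KB."""
    kb: Dict[str, List[str]] = {}
    frontier = [seed]
    for _ in range(depth):
        next_frontier: List[str] = []
        for concept in frontier:
            if concept in kb:
                continue  # skip concepts already expanded
            kb[concept] = extract_sub_concepts(concept, query_llm)
            next_frontier.extend(kb[concept])
        frontier = next_frontier
    return kb
```

A KB built this way (e.g., mapping a seed concept like "autonomous driving" to sub-concepts such as "pedestrian detection", a hypothetical example) could then seed test-case generation or be reviewed by engineers, consistent with the testing-guidance use described in the citing paper.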
“…Our technical implementation fundamentally relies on extracting knowledge from LLMs and will provide subpar guidance if the model has not captured relevant domain knowledge. Conceptually, our approach to guide testing with domain knowledge would also work with other sources for the knowledge base, whether manually created, extracted from a text corpus (Shen and Breaux, 2022; Barzamini et al., 2022a), or crowdsourced (Metaxa et al., 2021).…”
Section: Limitations
confidence: 99%