2016
DOI: 10.1007/978-3-319-46254-7_44
Running Out of Words: How Similar User Stories Can Help to Elaborate Individual Natural Language Requirement Descriptions

Cited by 4 publications (8 citation statements)
References 21 publications
“…Some studies were authored/co-authored by the same person, indicating the existence of an active research group in this field.

Discovering defects: provide recommendations on incomplete requirements [33]; identify ambiguous user stories [34]; define and measure quality factors from user stories [4], [35]; obtain a security defect reporting form from user stories [36]; indicate duplication between user stories [37].

Generate model/artifact: generate a test case [38]-[43], a class diagram [44], [45], a sequence diagram [46], a use case diagram [47]-[49], a use case scenario [50], a multi-agent system [51], source code [40], or a BPMN diagram [40] from user stories.

Identify the key abstractions: understand the semantic connection in user stories [52]-[54]; identify topics and summarize user stories [55], [56]; construct a goal model from a set of user stories [57]; define an ontology for user stories [58]; extract the conceptual model of user stories [59], [60]; find the linguistic structure of user stories [61]; prioritize and estimate user story complexity [62], [63]; extract user stories from text [64]-[66].

Trace links between model/NL requirements: track the development status of user stories from software artifacts [67]; identify the type of dependency of user stories [68]; trace user stories to software artifacts [69]…”
Section: Fig 4 Authorship Distribution Per Country
confidence: 99%
“…Five studies reported methods for finding defects or improving the quality of user stories. The category is meant to serve five purposes: (a) providing recommendations on incomplete requirements based on the knowledge gap [33]; (b) identifying ambiguous user stories [34]; (c) defining and measuring quality factors from user stories [4], [35]; (d) obtaining a security defect reporting form from the user stories [36]; and (e) indicating duplications between user stories [37]. Bäumer and Geierhos [33] identified incomplete requirements with preprocessing, lemmatization, and POS tagging.…”
Section: Discovering Defects
confidence: 99%
“…For example, Cohn's template allows the benefit part to be optional. Baumer and Geierhos (2016) proposed an approach that does not restrict user stories to any specified template. The main goal of this approach was to detect a missing argument and provide suggestion(s) to complete an incomplete requirement.…”
Section: Background and Related Work
confidence: 99%
“…Moreover, the compensation for incompleteness often relies on exhaustive resources, which are hard to obtain. Although there are compensation approaches for incomplete software requirements [20, 21], the underlying resources are still limited in scope.…”
Section: Current State of Research
confidence: 99%
“…Therefore, our developed trigger considers existing information to draw conclusions on how to fill in the missing slots. For incompleteness detection, we pursue the approach by Bäumer and Geierhos [20], which mainly relies on Semantic Role Labeling (SRL) and on a fine-grained analysis of the Predicate Argument Structure (PAS) of requirements. This detection and compensation of incompleteness is difficult because it is the absence of data that has to be examined.…”
Section: Incompleteness Trigger
confidence: 99%
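The slot-filling trigger described in the quote above can be sketched as a comparison between the semantic roles an SRL pass has filled and the roles a predicate's frame expects. This is a minimal, hypothetical sketch: the frame table, the PropBank-style role names (ARG0/ARG1/ARG2), and the clarification questions are assumptions for illustration, not the SRL resources used by Bäumer and Geierhos.

```python
# Hypothetical incompleteness trigger over a Predicate Argument
# Structure (PAS): unfilled expected roles become clarification
# questions. Frame definitions below are illustrative assumptions.
EXPECTED_ROLES = {
    # predicate lemma -> roles a complete requirement should fill
    "export": {"ARG0": "who exports", "ARG1": "what is exported"},
    "notify": {"ARG0": "who notifies", "ARG1": "who is notified",
               "ARG2": "about what"},
}

def incompleteness_trigger(predicate: str, filled_roles: dict) -> dict:
    """Compare filled PAS slots against the predicate's expected frame
    and return the missing slots, each with a clarification question."""
    expected = EXPECTED_ROLES.get(predicate, {})
    return {role: question for role, question in expected.items()
            if role not in filled_roles}

# "The system notifies the user" fills ARG0 and ARG1 but leaves the
# notification content (ARG2) open -> the trigger asks about it.
gaps = incompleteness_trigger("notify",
                              {"ARG0": "the system", "ARG1": "the user"})
print(gaps)  # -> {'ARG2': 'about what'}
```

In a full pipeline, the filled roles would come from an SRL system rather than being supplied by hand, and the returned questions would drive the compensation step that proposes completions to the author.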