2014
DOI: 10.1007/s10270-014-0431-3

Identifying duplicate functionality in textual use cases by aligning semantic actions

Cited by 21 publications (14 citation statements)
References 27 publications
“…Rago et al [15] apply natural language processing and machine learning techniques to the problem of searching for duplicate functionality in requirement specifications. These documents are treated as a set of textual use cases; the approach extracts a sequence chain (usage scenario) for every use case and compares pairs of chains to find duplicate subchains.…”
Section: Related Work (mentioning)
confidence: 99%
“…This is an extended abstract of the journal article with the same name, for the MODELS 2015 Conference [1].…”
Section: Introduction (mentioning)
confidence: 99%
“…Given the detailed information provided by our scenario analysis approach (e.g., it indicates the source of the defect), we have developed a rule-based heuristic as part of our scenario analysis approach to recommend fixes to requirements engineers. With these recommendations, engineers can review scenario descriptions and, via refactoring of scenarios, deal with defects that hurt the properties related to unambiguity, completeness and consistency. A similar strategy based on recommendation tables was proposed by Rago et al (2014).…”
Section: Recommending Fixes For Defects (mentioning)
confidence: 99%
“…The parsing strategy returns a parse tree based on statistical analysis of POS tags; however, POS tagging strategies do not perform this task with high precision, as demonstrated in Table 27. Table 27 shows the POS tagging results returned by Stanford (2015), NLTK (2015) and Compendium-js (2015). In Table 27, it is possible to notice that the Stanford (2015) and NLTK (2015) tools did not identify the main verbs of three sentences. These verbs are tagged as "Nouns": "Process", "Downloads" and "Types".…”
Section: Phrase-Structure Parsing (mentioning)
confidence: 99%