2019
DOI: 10.1007/s10664-019-09716-7

Mining non-functional requirements from App store reviews


Cited by 66 publications (50 citation statements)
References 65 publications
“…Guzman et al. [18] compared the performance of individual machine learning methods and their ensembles for automatically classifying reviews and found that the ensembles achieve better results. Jha and Mahmoud [19] focused on mining non-functional requirements (e.g., security and performance) and used classification methods to capture those requirements in reviews. These studies can filter out some of the noise in app reviews and provide developers with specific types of reviews.…”
Section: App Review Classification
confidence: 99%
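
Both studies in the statement above treat review triage as supervised text classification, in one case with an ensemble of learners. As a rough illustration only (not code from either cited paper), the sketch below uses scikit-learn to combine three off-the-shelf classifiers into a hard-voting ensemble over TF-IDF features; the example reviews, NFR labels, and model choices are assumptions made purely for demonstration.

# Minimal sketch (assumptions, not the cited papers' code): classifying app
# reviews into non-functional requirement (NFR) and other concern categories
# with a hard-voting ensemble of standard scikit-learn classifiers.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Toy labeled reviews; the actual studies rely on large sets of manually
# annotated reviews per category.
reviews = [
    "The app keeps asking for permissions it does not need",
    "My login credentials were exposed after the last update",
    "Scrolling is laggy and the app drains my battery",
    "It takes forever to load the home screen",
    "Please add a dark mode option",
    "Crashes every time I open the camera",
]
labels = ["security", "security", "performance", "performance",
          "feature_request", "bug_report"]

# Majority voting over three common text classifiers (hard voting, since
# LinearSVC does not expose predict_proba).
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("svm", LinearSVC()),
    ],
    voting="hard",
)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
model.fit(reviews, labels)

print(model.predict(["The app is so slow since the update",
                     "Why does it need access to my contacts?"]))

In practice the feature extraction, label taxonomy, and ensemble members would be tuned on a labeled review corpus; the snippet only makes the classification setup described in the statement concrete.
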
“…Architects are sometimes involved in the elicitation and definition of quality requirements (Ameller et al. 2012; Daneva et al. 2013). There are several papers on using different kinds of user reviews on mobile app markets as a potential source of quality requirements (Groen et al. 2017; Jha and Mahmoud 2019; Wang et al. 2018; Lu and Liang 2017). This is sometimes referred to as CrowdRE (Glinz 2019) or data-driven requirements engineering (Maalej et al. 2015).…”
Section: Related Work
confidence: 99%
“…Extracting concerns at a domain level can be a more challenging problem than focusing on single apps, which typically receive only a limited number of reviews or tweets per day (McIlroy et al. 2017). Furthermore, existing crowd feedback mining techniques are calibrated to extract technical user concerns, such as bug reports and feature requests, often ignoring other non-technical types of concerns that originate from the operational characteristics of the app (Jha and Mahmoud 2019; Martin et al. 2017). These observations emphasize the need for new methods that can integrate multiple heterogeneous sources of user feedback to reflect a more accurate picture of the ecosystem.…”
Section: Research Gap and Motivation
confidence: 99%
“…2017). Furthermore, existing crowd feedback mining techniques are calibrated to extract technical user concerns, such as bug reports and feature requests, often ignoring other non-technical types of concerns that originate from the operational characteristics of the app (Jha and Mahmoud 2019; Martin et al. 2017).…”
Section: Background Rationale and Research Questions
confidence: 99%