2018 IEEE 26th International Requirements Engineering Conference (RE)
DOI: 10.1109/re.2018.00026
App Review Analysis Via Active Learning: Reducing Supervision Effort without Compromising Classification Accuracy

Cited by 49 publications (34 citation statements)
References 32 publications
“…By crowd [48,54,72], by textual data analysis [13,20,25,33,41,42,49,51,52,62,64,80,86,88,89], by prototyping [22], sentiment analysis [21,79], image and unstructured data analysis [21,73] 22…”
Section: Analysis and Validation
Mentioning confidence: 99%
“…For requirements analysis and validation, Mead et al [51] proposed that machine learning algorithms can be used to analyse individual Personae Non Gratae created by crowd users. To accommodate AI and exploit human intelligence in requirements analysis, Dhinakaran et al [88] proposed an active learning approach to classify requirements into feature requests, bug reports, ratings, and user experience. Recently, Williams et al [87] proposed that automated social mining and domain modeling techniques can be used to analyse mobile app success and failure stories to identify end-users' domain concerns.…”
Section: A Research Map For Intelligent CrowdRE
Mentioning confidence: 99%
“…Online reviews are the most frequently used type of dynamic data for eliciting requirements (53%), followed by micro-blogs (18%), online discussions/forums (12%), and software repositories. • Online reviews included app reviews, reviews compiled by experts, and online user reviews. Among the studies which used online reviews, a majority used app reviews as the sources of potential requirements (75%) [35-54]. Of them, 14 used app reviews from multiple distribution platforms such as Apple AppStore and Google Play to increase generalizability, while eleven used reviews from a single distribution platform, and one did not specify the number of app distribution platforms. • Of the studies which used online reviews, 17% (n = 6) extracted user reviews of software and video games [55], IoT products [56], compact cameras [57], internet security [58], Jira and Trello [59], and Jingdong.…”
Section: The Specific Types Of Dynamic Data Used For Automated Requir…
Mentioning confidence: 99%
“…Several different algorithms are often applied in the same study to compare classification performance. • Active learning based on uncertainty sampling, which selects the data points a model is most uncertain about for manual labeling, was applied to classify app reviews into feature requests, bug reports, ratings, and user experience [35]. • A semi-supervised learning technique was used to classify app reviews as either functional or non-functional requirements [43].…”
Section: Approaches Used For Human-sourced Data
Mentioning confidence: 99%
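The uncertainty-sampling strategy described in the citation above can be sketched in a few lines: score each unlabeled review by the model's confidence in its top predicted class, and query the least-confident items for manual labeling. The function name, the toy probability values, and the four-class setup below are illustrative assumptions, not code from the cited paper.

```python
# Minimal sketch of least-confidence uncertainty sampling for active learning.
# The probability lists stand in for the output of a trained app-review
# classifier over four hypothetical classes: feature request, bug report,
# rating, and user experience.

def least_confident(probabilities, k):
    """Return indices of the k pool items the model is least sure about.

    probabilities: one per-class probability list per unlabeled review.
    """
    # A prediction's confidence is the probability of its top class;
    # uncertainty sampling queries the items with the lowest confidence.
    confidences = [(max(p), i) for i, p in enumerate(probabilities)]
    confidences.sort()  # least confident first
    return [i for _, i in confidences[:k]]

# Hypothetical predictions for five unlabeled reviews.
pool = [
    [0.90, 0.05, 0.03, 0.02],  # confident: unlikely to be queried
    [0.30, 0.28, 0.22, 0.20],  # uncertain: sent to an annotator
    [0.55, 0.25, 0.10, 0.10],
    [0.26, 0.26, 0.25, 0.23],  # most uncertain
    [0.70, 0.15, 0.10, 0.05],
]
print(least_confident(pool, 2))  # → [3, 1]
```

In a full active-learning loop, the queried items would be labeled by a human, added to the training set, and the classifier retrained before the next round of sampling, which is how such approaches reduce supervision effort.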
“…Online user forums and social media provide another channel to communicate with the wider user community, obtain diverse input, and increase inclusivity. 11,12 Recent studies have shown that end-user reviews, 13 tweets 14 and posts 15 contain relevant and insightful information on user experience, 16 feature requests 17,18 and user rationale that can facilitate future design decisions. 4,19,20 This intuitive RE-related information can be found in mobile app stores, issue tracking systems, user forums and social media like Twitter.…”
Mentioning confidence: 99%