Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement 2016
DOI: 10.1145/2961111.2962584

Towards Effectively Test Report Classification to Assist Crowdsourced Testing

Cited by 44 publications (34 citation statements). References 24 publications.
“…Finally, terms other than verbs and nouns are removed. Filtering out meaningless terms as in existing work [15], we obtain a vocabulary of technical terms.…”
Section: Baidu Crowdtest Dataset
confidence: 99%
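The preprocessing this excerpt describes (keep only verbs and nouns, then filter out meaningless terms) could be sketched as below. This is a minimal illustration, not the cited authors' pipeline: the tiny part-of-speech lexicon and the stop list are hypothetical stand-ins for whatever tagger and term list they actually used.

```python
# Toy POS lexicon; a real pipeline would use a trained POS tagger.
TOY_POS = {
    "crash": "noun", "button": "noun", "app": "noun",
    "click": "verb", "freeze": "verb",
    "the": "det", "quickly": "adv",
}

# Hypothetical list of "meaningless" terms to filter out.
STOP_TERMS = {"app"}

def build_vocabulary(tokens):
    """Keep only verbs and nouns, drop stop terms, return a sorted vocabulary."""
    kept = [t for t in tokens if TOY_POS.get(t) in ("noun", "verb")]
    return sorted({t for t in kept if t not in STOP_TERMS})

print(build_vocabulary(["the", "app", "crash", "quickly", "click", "button"]))
# → ['button', 'click', 'crash']
```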
“…All these studies use crowdsourcing to solve problems in traditional software-testing activities. Other studies focus on newly encountered problems within crowdsourced testing itself, e.g., crowdsourced report prioritization [5] and crowdsourced report classification [15,16]. Our approach likewise targets a newly encountered and important problem in crowdsourced testing.…”
Section: RQ2: Is Each Of The Three Objectives Necessary In Moose?
confidence: 99%
“…They proposed prioritization approaches based on text descriptions, screenshot images, and a combination of both sources of information. Wang et al. [10] proposed a cluster-based classification approach that classifies crowdsourced reports effectively when plentiful training data are available. However, sufficient training data are often unavailable.…”
Section: Crowdsourced Software Testing
confidence: 99%
“…They designed strategies for dynamically selecting the riskiest and most diverse test reports for inspection in each iteration. Wang et al. [10,11] proposed a cluster-based classification approach for effective classification of crowdsourced reports that addresses the local-bias problem. Unfortunately, the Android bug reports lack the severity labels needed as training data, and these approaches often require users to manually label a large amount of training data, which is time-consuming and labor-intensive in practice.…”
Section: Introduction
confidence: 99%
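The "cluster-based classification" idea referenced in these excerpts can be sketched in miniature: group training reports by feature similarity, then label a new report with the majority class of its nearest cluster. Everything here is an illustrative assumption — the 2-D feature vectors, the greedy distance-threshold clustering, and the example labels are not the method of Wang et al., only a toy instance of the general cluster-then-classify pattern.

```python
from collections import Counter

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster(points, thresh=0.5):
    """Greedy clustering: join the first cluster whose centroid is within
    `thresh`, otherwise start a new cluster. Returns lists of point indices."""
    clusters = []
    for i, p in enumerate(points):
        for cl in clusters:
            if dist2(p, centroid([points[j] for j in cl])) <= thresh ** 2:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def classify(report, train, thresh=0.5):
    """Label `report` with the majority class of its nearest training cluster."""
    feats = [f for f, _ in train]
    clusters = cluster(feats, thresh)
    cents = [centroid([feats[j] for j in cl]) for cl in clusters]
    nearest = min(range(len(cents)), key=lambda i: dist2(report, cents[i]))
    labels = [train[j][1] for j in clusters[nearest]]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical labeled crowdsourced reports as 2-D feature vectors.
train = [((0.1, 0.2), "true fault"), ((0.2, 0.1), "true fault"),
         ((0.9, 0.8), "false positive"), ((0.8, 0.9), "false positive")]
print(classify((0.85, 0.85), train))  # → false positive
```

The appeal of this pattern, as the excerpts note, is its dependence on labeled training data: the clustering step is unsupervised, but the final labeling still needs enough labeled reports per cluster, which is exactly the limitation the citing papers point out.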