2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298658
Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection


Citation Types: 3 supporting, 256 mentioning, 0 contrasting
Cited by 389 publications (276 citation statements)
References 23 publications
“…In this work another citizen-annotated dataset was used, and the results show robustness to data corruption. Hence, these results confirm the discussion of [19] and motivate the use of citizen science. How much impact data corruption has on training and evaluation, and how large the dataset has to be to cope with these annotation errors, are important questions that must be answered.…”
Section: Discussion (supporting)
confidence: 85%
“…Although the comparison between experts and amateurs gives high accuracy, some mistakes occurred. In [19] it is discussed how learning algorithms are robust to annotation errors and training-data corruption. This assumption holds only if the training set has sufficient samples.…”
Section: Discussion (mentioning)
confidence: 99%
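The robustness claim in these two statements is easy to probe empirically. Below is a minimal Python sketch, our own illustration rather than the setup of [19] or the citing paper: it trains the same classifier while flipping an increasing fraction of training labels, showing how tolerance to corrupted annotations grows with training-set size. The dataset sizes, noise rates, and the logistic-regression model are all assumptions.

```python
# Train a fixed classifier on synthetic data while corrupting an increasing
# fraction of training labels; compare accuracy across training-set sizes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def accuracy_under_label_noise(n_train, noise_rate):
    """Train on n_train samples with a noise_rate fraction of flipped labels."""
    X, y = make_classification(n_samples=n_train + 2000, n_features=20,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=n_train, test_size=2000, random_state=0)
    # Corrupt a random subset of the training labels (binary flip).
    flip = rng.random(len(y_tr)) < noise_rate
    y_noisy = np.where(flip, 1 - y_tr, y_tr)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy)
    return model.score(X_te, y_te)  # evaluate on clean test labels

for n in (100, 1000, 10000):
    scores = [accuracy_under_label_noise(n, p) for p in (0.0, 0.1, 0.3)]
    print(f"n_train={n:5d}  acc at 0/10/30% noise: "
          + "  ".join(f"{s:.3f}" for s in scores))
```

In this toy setting the accuracy drop caused by noisy labels shrinks as the training set grows, which is the pattern the quoted statement describes.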
“…The distinction of fine-grained categories has been studied deeply in the past [4,19,15,86,64], where applications range from fashion style recognition [30] or cars [33] to more biodiversity-driven scenarios like recognition of flowers [38], birds [82,80], dogs [29] or moths [63]. One of the most recent and promising developments is the guidance of attention to identify meaningful parts of objects [43,68], refined by advanced pooling approaches [41,69].…”
Section: Automated Detection and Identification (mentioning)
confidence: 99%
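One well-known member of that pooling family is second-order (bilinear) pooling, which aggregates outer products of CNN channel activations instead of simple averages. The numpy sketch below illustrates the general idea; whether it matches the specific methods behind [41,69] is an assumption on our part, and the feature map here is random data.

```python
# Second-order (bilinear) pooling of a CNN feature map: channel co-occurrence
# statistics followed by the usual signed-sqrt and L2 normalization.
import numpy as np

def bilinear_pool(features):
    """features: (C, H, W) activation map -> (C*C,) pooled descriptor."""
    C, H, W = features.shape
    A = features.reshape(C, H * W)
    G = A @ A.T / (H * W)                    # channel co-occurrence (Gram) matrix
    v = G.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square-root normalization
    return v / (np.linalg.norm(v) + 1e-12)   # L2 normalization

desc = bilinear_pool(np.random.rand(8, 14, 14))  # hypothetical feature map
print(desc.shape)  # (64,)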
“…However, any human with good eyesight is an expert in the task of detecting a person or a dog in a picture. We found that the workforce in plentiful supply on AMT holds value even for fine-grained visual categorization: the study mentioned above [14] found that expert and non-expert annotators perform roughly equally well in part-localization tasks (e.g., "click on the eye"). This is good news, since training a machine to identify these landmarks is a key step for species classification (Section 2.1).…”
Section: Harvesting Annotations From Experts and Non-experts (mentioning)
confidence: 84%
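A minimal way to quantify the expert vs. non-expert comparison for such part-localization tasks is the mean pixel distance between each group's clicks and a reference annotation. The sketch below uses made-up coordinates; it is our illustration of the idea, not the evaluation protocol of [14].

```python
# Summarize "click on the eye" annotation quality as mean Euclidean distance
# (in pixels) from a reference expert click, per annotator group.
import numpy as np

def mean_click_error(clicks, reference):
    """Mean Euclidean distance between click and reference points, in pixels."""
    clicks = np.asarray(clicks, dtype=float)        # shape (n_images, 2)
    reference = np.asarray(reference, dtype=float)  # shape (n_images, 2)
    return float(np.linalg.norm(clicks - reference, axis=1).mean())

expert_ref = [(120, 84), (98, 70), (143, 91)]   # hypothetical expert clicks
expert_2   = [(121, 85), (97, 72), (141, 90)]   # a second expert
non_expert = [(118, 88), (101, 69), (146, 95)]  # an AMT worker

print("expert vs expert:    ", mean_click_error(expert_2, expert_ref))
print("non-expert vs expert:", mean_click_error(non_expert, expert_ref))
```

Comparable error values for the two groups would support the quoted finding that non-experts localize parts about as well as experts.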
“…The answer is, unfortunately, 'no.' A recent study [14] reveals that the fine-grained categories in CUB-200 [15] and ImageNet [16], both of which used AMT to clean the datasets, have significant type I and type II errors. While as computer vision researchers we aspire to develop classification models that are robust to such errors, in the context of Visipedia, image labels that are 'more or less correct' will not suffice.…”
Section: Harvesting Annotations From Experts and Non-experts (mentioning)
confidence: 99%
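For a binary dataset-cleaning decision ("does this image really show the claimed species?"), the two error types can be made concrete: a type I error keeps a wrong image, a type II error discards a correct one. The sketch below computes both rates from hypothetical crowd and expert labels; the numbers are illustrative, not measurements on CUB-200 or ImageNet.

```python
# Type I / type II error rates of crowd labels against expert ground truth
# for a binary keep/discard dataset-cleaning task.
import numpy as np

def error_rates(crowd, truth):
    crowd, truth = np.asarray(crowd, bool), np.asarray(truth, bool)
    fp = np.sum(crowd & ~truth)          # kept, but actually wrong  -> type I
    fn = np.sum(~crowd & truth)          # discarded, but correct    -> type II
    type1 = fp / max(np.sum(~truth), 1)  # rate among truly-wrong images
    type2 = fn / max(np.sum(truth), 1)   # rate among truly-correct images
    return type1, type2

truth = [1, 1, 1, 0, 0, 1, 0, 1]  # hypothetical expert ground truth
crowd = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical AMT consensus labels

t1, t2 = error_rates(crowd, truth)
print(f"type I rate: {t1:.2f}   type II rate: {t2:.2f}")
```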