2011 IEEE 5th International Conference on Internet Multimedia Systems Architecture and Application 2011
DOI: 10.1109/imsaa.2011.6156351
Crowdsourcing annotation: Modelling keywords using low level features

Cited by 9 publications (5 citation statements). References 10 publications.
“…Several investigators have tested the reliability of crowdsourced image annotation. Mitry et al. [25] compared the quality of crowdsourced image grading with that of experts. They used 100 retinal fundus photographs selected by two researchers.…”
Section: Existing Work (mentioning)
confidence: 99%
“…This approach can address such problems cheaply and quickly, and helps task creators to exploit different opinions [34]. Implicit crowdsourced annotations can be gathered without burdening the involved users [64]. Since the collection is performed without supervision, it may contain several errors due to erroneous participant feedback, independent of whether or not participants receive a reward for their participation [10].…”
Section: Related Work (mentioning)
confidence: 99%
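The excerpt above notes that unsupervised crowd collection yields erroneous labels. A minimal sketch of one common mitigation, majority voting over redundant labels, is shown below; the image identifiers and label values are hypothetical, not taken from the cited papers.

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate redundant crowd labels by majority vote.

    labels_per_item: dict mapping item id -> list of labels from workers.
    Returns a dict mapping item id -> most frequent label.
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}

# Hypothetical noisy crowd labels for two images
votes = {"img1": ["cat", "cat", "dog"], "img2": ["tree", "tree", "tree"]}
print(majority_vote(votes))  # {'img1': 'cat', 'img2': 'tree'}
```

Redundancy plus voting trades annotation cost for robustness: a single erroneous worker is outvoted as long as the majority of workers label correctly.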
“…al. [17] used both vocabulary keywords and free keywords to check whether guided annotation (as provided by a structured vocabulary) would increase annotation consistency. The researchers concluded that combining free keywords with vocabulary keywords does increase annotation consistency compared to using free keywords alone.…”
Section: Related Work (mentioning)
confidence: 99%
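The annotation consistency discussed above can be quantified in several ways; a minimal sketch using Jaccard similarity between two annotators' keyword sets is given below (the keywords are hypothetical, and the cited work may use a different consistency measure).

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical annotations of the same image by two workers
free_kw = jaccard(["sunset", "beach", "sea"], ["beach", "ocean", "sun"])
vocab_kw = jaccard(["beach", "sea", "sky"], ["beach", "sea", "sunset"])
print(free_kw)   # 0.2  (1 shared keyword out of 5 distinct)
print(vocab_kw)  # 0.5  (2 shared keywords out of 4 distinct)
```

Averaging such pairwise scores over many images gives a simple consistency figure for comparing free-keyword against vocabulary-guided annotation.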
“…Visual models are then fed with image features extracted from unseen images to predict their tags [16]. Assuming that good visual models can be trained, image retrieval using the training-by-example paradigm provides a promising alternative to text-based methods, since it does not require explicit annotation of all images in the collection, only a small set of properly annotated images [17]. Nevertheless, the first important step in creating effective visual models is to use good training examples (pairs of images and annotations).…”
Section: Introduction (mentioning)
confidence: 99%
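To make the training-by-example idea above concrete, here is a minimal sketch of tag prediction by propagating annotations from the nearest training image in feature space; the 2-D feature vectors and tags are invented for illustration, and the cited papers may use richer features and models.

```python
import numpy as np

def predict_tags(train_feats, train_tags, query_feat):
    """Propagate tags from the nearest annotated training image.

    train_feats: (n, d) array of low-level feature vectors.
    train_tags:  list of n tag sets, one per training image.
    query_feat:  (d,) feature vector of an unseen image.
    """
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_tags[int(np.argmin(dists))]

# Hypothetical 2-D colour features for three annotated images
feats = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
tags = [{"sunset"}, {"sea"}, {"beach"}]
print(predict_tags(feats, tags, np.array([0.85, 0.2])))  # {'sunset'}
```

This nearest-neighbour propagation illustrates why training-example quality matters: a single badly annotated image can transfer wrong tags to every unseen image that falls near it in feature space.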