2007
DOI: 10.1109/tmm.2007.900153
Modeling Human Judgment of Digital Imagery for Multimedia Retrieval



Cited by 17 publications (12 citation statements)
References 12 publications
“…To cope with the demand for labeled examples, Lin et al. initiated a collaborative annotation effort in the TRECVID 2003 benchmark [30]. Using tools from Christel et al. [31] and Volkmer et al. [32], [33], a common annotation effort was again made for the TRECVID 2005 benchmark, yielding a large and accurate set of labeled examples for 39 concepts taken from a predefined collection [4]. We provided an extension of this compilation, increasing the collection to 101 concept annotations, and also donated the low-level features, classifier models, and resulting concept detectors for this set of concepts on TRECVID 2005 and 2006 data as part of the MediaMill Challenge [3].…”
Section: Related Work (mentioning)
confidence: 99%
“…We suspect that the level of correctness of our annotations was higher than usual thanks to the special attention the Netherlands Institute for Sound & Vision gave to the collection they prepared for TRECVID. In many cases, of course, human annotators do err and disagree [28]. The time-consuming nature of human annotation causes the number of subject terms per program to be low, much lower than the number of topics that are visible in the video.…”
Section: Existing GTAA Relations (mentioning)
confidence: 99%