Proceedings of the International Conference on Multimedia Information Retrieval 2010
DOI: 10.1145/1743384.1743471
Object-based tag propagation for semi-automatic annotation of images

Abstract: Over the last few years, social network systems have greatly increased users' involvement in online content creation and annotation. Since such systems usually need to deal with a large amount of multimedia data, it becomes desirable to realize an interactive service that minimizes tedious and time-consuming manual annotation. In this paper, we propose an interactive online platform that is capable of performing semi-automatic image annotation and tag recommendation for an extensive online database. First, when…

Cited by 11 publications (6 citation statements). References 18 publications.
“…The number of tags we delete is chosen uniformly at random, with the only constraint of leaving a minimum of |Γ_I| ≥ 3 input tags so that there is presumably enough information for the recommender systems to provide good recommendations [7]. This constraint also implies that, in order to be able to remove at least one tag from each audio clip (|Γ_D| ≥ 1), we can only consider for evaluation the audio clips that have at least four tags. After we remove some tags, we run the four tag recommendation methods using Γ_I as input and the similarity matrices we computed in the training phase.…”
Section: Prediction-based Evaluation Methodology
confidence: 99%
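The constraints in that protocol are easy to make concrete. Below is a minimal Python sketch, under stated assumptions, of the tag-deletion split: the example clip, its tags, and the random seed are invented, and the cited evaluation additionally feeds Γ_I to four recommenders and measures how many tags in Γ_D they recover.

```python
import random

# Hypothetical sketch of the tag-deletion split used for evaluation:
# keep at least MIN_INPUT_TAGS input tags (|Gamma_I| >= 3) and hold out
# at least MIN_DELETED tags (|Gamma_D| >= 1), which restricts evaluation
# to clips with at least four tags.
MIN_INPUT_TAGS = 3
MIN_DELETED = 1

def split_tags(tags, rng):
    """Randomly split a clip's tags into (Gamma_I, Gamma_D)."""
    if len(tags) < MIN_INPUT_TAGS + MIN_DELETED:
        raise ValueError("clip needs at least four tags to be evaluated")
    # The number of deleted tags is chosen uniformly over the feasible range.
    n_delete = rng.randint(MIN_DELETED, len(tags) - MIN_INPUT_TAGS)
    shuffled = list(tags)
    rng.shuffle(shuffled)
    return shuffled[n_delete:], shuffled[:n_delete]

rng = random.Random(42)  # seed chosen only for reproducibility of the example
gamma_i, gamma_d = split_tags(["guitar", "acoustic", "strum", "loop", "clean"], rng)
print("Gamma_I:", gamma_i, "| Gamma_D (to predict):", gamma_d)
```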
“…In general, tag recommendations are based either on content analysis of online resources or on the other tags that users introduce during the annotation process. In the case of content-based recommendations, a typical approach consists of defining, for a given resource to be described, a neighbourhood of other resources (based on some similarity measure) and then recommending tags that are used to annotate resources in this neighbourhood [12,24]. Another approach is to use machine learning techniques to learn mappings between tags and content features [15,25,26].…”
Section: Introduction
confidence: 99%
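To illustrate the neighbourhood-based strategy mentioned in the statement above, the following Python sketch ranks database resources by similarity to a query and recommends the most frequent tags among the nearest neighbours. The toy feature vectors, the use of cosine similarity, and k = 2 are illustrative assumptions, not details taken from the cited systems.

```python
from collections import Counter

import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query_vec, database, k=2, n_tags=3):
    """Recommend the most frequent tags among the k nearest resources.

    database: list of (feature_vector, tag_list) pairs.
    """
    ranked = sorted(database, key=lambda item: cosine_sim(query_vec, item[0]),
                    reverse=True)
    counts = Counter(tag for _, tags in ranked[:k] for tag in tags)
    return [tag for tag, _ in counts.most_common(n_tags)]

# Invented toy database: 2-D "visual features" with user-assigned tags.
database = [
    (np.array([1.0, 0.1]), ["beach", "sea", "sunset"]),
    (np.array([0.9, 0.2]), ["sea", "boat"]),
    (np.array([0.1, 1.0]), ["city", "night"]),
]
print(recommend(np.array([1.0, 0.0]), database))  # -> ['sea', 'beach', 'sunset']
```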
“…Most propagation methods are based on some form of clustering. Ivanov et al. (2010) use hierarchical k-means clustering of visual features to detect similar pictures. When the user annotates an object in a new image, the system automatically propagates this annotation to similar objects from the database using a duplicate detector.…”
Section: Label Propagation
confidence: 99%
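The following sketch illustrates that propagation idea under simplifying assumptions: flat (rather than hierarchical) k-means groups toy feature vectors, and a distance threshold stands in for the duplicate detector, so a tag given to one image is copied only to same-cluster images close enough to count as duplicates. None of the features or parameters come from the cited system.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented 2-D "visual features" for four images; the first two and the
# last two are near-duplicate pairs.
features = np.array([
    [0.10, 0.95], [0.12, 0.93],
    [0.80, 0.10], [0.85, 0.12],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

def propagate(tag, source_idx, threshold=0.1):
    """Copy `tag` to same-cluster images within `threshold` of the source."""
    cluster = km.labels_[source_idx]
    targets = []
    for i, feat in enumerate(features):
        if i == source_idx or km.labels_[i] != cluster:
            continue
        if np.linalg.norm(feat - features[source_idx]) <= threshold:
            targets.append(i)  # treated as a duplicate: tag is propagated
    return targets

print(propagate("sunset", source_idx=0))  # -> [1]
```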
“…Former systems normally use feature extraction techniques to analyse content resources, followed by training of machine learning models that can predict tags based on the extracted features (e.g., Li and Wang 2008; Turnbull et al. 2008; Toderici et al. 2010). Folksonomy-based systems normally take advantage of tag co-occurrence information in previously annotated resources in order to provide relevant tag recommendations for newly annotated resources (e.g., Sigurbjörnsson and Zwol 2008; Garg and Weber 2008; De Meo et al. 2009; Ivanov et al. 2010; Font et al. 2013b).…”
Section: Introduction
confidence: 99%
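As a final illustration, here is a minimal Python sketch of the folksonomy-based approach described above: co-occurrence counts are accumulated over previously annotated resources, and the tags that co-occur most often with the user's input tags are suggested. The toy annotation sets are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Invented folksonomy: each set holds the tags of one annotated resource.
annotations = [
    {"guitar", "acoustic", "chord"},
    {"guitar", "electric", "riff"},
    {"guitar", "acoustic", "strum"},
    {"drums", "loop"},
]

# Symmetric co-occurrence counts over unordered tag pairs.
cooc = Counter()
for tags in annotations:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(input_tags, n=3):
    """Suggest the n tags that co-occur most with the input tags."""
    scores = Counter()
    for (a, b), count in cooc.items():
        if a in input_tags and b not in input_tags:
            scores[b] += count
    return [tag for tag, _ in scores.most_common(n)]

print(recommend({"guitar"}))  # -> ['acoustic', 'chord', 'electric']
```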