2009 IEEE 12th International Conference on Computer Vision
DOI: 10.1109/iccv.2009.5459180
I know what you did last summer: object-level auto-annotation of holiday snaps

Abstract: The state of the art in visual object retrieval from large databases allows searching millions of images at the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work into a large-scale system for auto-annotation of holiday snaps. The resulting method allows objects such as landmark buildings, scenes, pieces of art, etc. to be labeled at the object level in a fully automatic manner…

Cited by 80 publications (116 citation statements)
References 23 publications
“…Finally, Wikipedia⁸ articles are attached to the images and the validity of these associations is checked. Gammeter et al. [9] extend this idea toward object-based auto-annotation of holiday photos in a large database that includes landmark buildings, statues, scenes, and pieces of art, with the help of external resources such as Wikipedia. In both [20] and [9], GPS coordinates are used to pre-cluster objects, which may not always be available.…”
Section: Combined Analysis of Geographical Context and Visual Content
confidence: 99%
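
The pre-processing step this quote refers to, grouping photos by GPS position before any expensive visual matching, can be illustrated with a minimal sketch. The code below assumes photos are dicts with decimal "lat"/"lon" keys; the function name, grid-cell size, and data layout are illustrative assumptions, not details from [9] or [20].

import math
from collections import defaultdict

def gps_precluster(photos, cell_deg=0.001):
    """Bucket photos into fixed-size lat/lon grid cells (~100 m at the
    equator for cell_deg=0.001). Photos lacking GPS data are skipped,
    which is exactly the limitation the citing authors point out."""
    clusters = defaultdict(list)
    for photo in photos:
        lat, lon = photo.get("lat"), photo.get("lon")
        if lat is None or lon is None:
            continue  # no GPS EXIF tag, so this photo cannot be pre-clustered
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        clusters[key].append(photo)
    return clusters

# Hypothetical usage: visual matching then runs only within each cell.
photos = [{"id": 1, "lat": 48.8583, "lon": 2.2945},
          {"id": 2, "lat": 48.8584, "lon": 2.2946},
          {"id": 3}]  # no GPS, silently dropped
print({k: [p["id"] for p in v] for k, v in gps_precluster(photos).items()})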
“…Massa and Avesani [17] compared and evaluated user trust models on their web-based application presented online at Epinions.⁹ They considered two kinds of user trust models, global and local trust. The global trust model assigns a trust value to each user, independently of who is evaluating the other users.…”
Section: User Trust Modeling
confidence: 99%
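
The global/local distinction the quote draws can be made concrete with a toy example. The sketch below computes a viewer-independent (global) trust value by averaging incoming ratings; this averaging rule is a placeholder assumption, not Massa and Avesani's actual metric.

from collections import defaultdict

def global_trust(ratings):
    """Compute one viewer-independent trust value per user by averaging
    all incoming ratings. `ratings` is a list of (rater, ratee, value)
    triples with value in [0, 1]."""
    incoming = defaultdict(list)
    for rater, ratee, value in ratings:
        incoming[ratee].append(value)
    return {user: sum(vals) / len(vals) for user, vals in incoming.items()}

ratings = [("alice", "bob", 0.9), ("carol", "bob", 0.7), ("bob", "carol", 1.0)]
print(global_trust(ratings))  # {'bob': 0.8, 'carol': 1.0}

A local trust model would instead condition the value on who is asking, for example by propagating trust along rating paths from the evaluating user.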
“…Finally, Wikipedia⁸ articles are attached to the images, and the validity of these associations is checked. Gammeter et al. [6] extend this idea toward object-based auto-annotation of holiday photos in a large database that includes landmark buildings, statues, scenes, and pieces of art, with the help of external resources such as Wikipedia. In both [33] and [6], GPS coordinates are used to pre-cluster objects, which may not always be available.…”
Section: Geotagging in Social Network and Sharing Websites
confidence: 99%
“…The authors perform clustering on GPS coordinates and visual texture features from the image pool and extract landmark names as the most frequent tags associated with the particular visual cluster. Additionally, they extract landmark names from travel guide articles, such as Wikitravel,⁶ and visually cluster photos gathered by querying Google Images.⁷ However, the test set they use is quite limited: 728 images in total for a 124-category problem, or fewer than six test images per landmark.…”
Section: Geotagging in Social Network and Sharing Websites
confidence: 99%
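
The naming heuristic this quote describes, taking the most frequent tag of a visual cluster as the landmark name, reduces to a few lines. The sketch below assumes clusters are already given as tag lists; the cluster IDs and tags are made up for illustration, and a real system would also need to filter generic tags such as city or country names.

from collections import Counter

def name_clusters(cluster_tags):
    """For each cluster, pick the most frequent user tag as its landmark
    name, mirroring the 'most frequent tag per visual cluster' heuristic."""
    return {cid: Counter(tags).most_common(1)[0][0]
            for cid, tags in cluster_tags.items() if tags}

clusters = {
    0: ["colosseum", "rome", "colosseum", "italy", "colosseum"],
    1: ["louvre", "paris", "louvre"],
}
print(name_clusters(clusters))  # {0: 'colosseum', 1: 'louvre'}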
“…They handle tags by a modified TF-IDF ranking and link their results to Wikipedia.² Gammeter et al. [4] overlay a geospatial grid over the Earth and pairwise-match the retrieved photos within each tile using visual features. They then cluster the photos into groups of images depicting the same scene.…”
Section: A. Using Both Visual Descriptions and Textual Metadata
confidence: 99%
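
The quote mentions a modified TF-IDF ranking over tags without specifying the modification. As a baseline, the sketch below implements plain TF-IDF over photo tag lists; treat it as the standard starting point, not the cited authors' exact ranking.

import math
from collections import Counter

def tfidf(documents):
    """Standard TF-IDF over tag lists: tf = tag count within one photo's
    tag list, idf = log(N / number of photos containing the tag)."""
    n = len(documents)
    df = Counter(tag for tags in documents for tag in set(tags))
    scores = []
    for tags in documents:
        tf = Counter(tags)
        scores.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return scores

docs = [["eiffel", "paris", "tower"], ["paris", "louvre"], ["eiffel", "tower"]]
for s in tfidf(docs):
    print(s)  # rarer tags such as 'louvre' score highest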