2017
DOI: 10.1016/j.patcog.2017.05.019

Automatic image annotation via label transfer in the semantic space

Abstract: Automatic image annotation is among the fundamental problems in computer vision and pattern recognition, and it is becoming increasingly important for developing algorithms able to search and browse large-scale image collections. In this paper, we propose a label propagation framework based on Kernel Canonical Correlation Analysis (KCCA), which builds a latent semantic space where the correlation of visual and textual features is well preserved into a semantic embedding. The proposed approach is rob…
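
As a rough illustration of the label-transfer idea described in the abstract, the sketch below learns a shared visual-textual space and propagates tags from the nearest training images in that space. It uses plain linear CCA from scikit-learn as a simplified stand-in for the paper's KCCA, and the array names, number of components, and k-nearest-neighbour voting rule are illustrative assumptions rather than the authors' exact method.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import NearestNeighbors

def fit_semantic_space(X_visual, T_textual, n_components=32):
    # Learn projections that maximize correlation between the paired
    # visual and textual feature matrices (rows are training images).
    cca = CCA(n_components=n_components)
    cca.fit(X_visual, T_textual)
    return cca

def transfer_labels(cca, X_visual_train, train_tags, x_query, k=5, top_n=5):
    # Project the training images and the query into the shared space,
    # then vote over the tags of the k nearest training images.
    Z_train = cca.transform(X_visual_train)
    z_query = cca.transform(np.asarray(x_query).reshape(1, -1))
    nn = NearestNeighbors(n_neighbors=k).fit(Z_train)
    _, idx = nn.kneighbors(z_query)
    votes = {}
    for i in idx[0]:
        for tag in train_tags[i]:          # train_tags[i] is the tag list of image i
            votes[tag] = votes.get(tag, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:top_n]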

Cited by 78 publications (49 citation statements)
References 53 publications
“…The Content-Based Image Retrieval approach automatically retrieves and indexes different low-level features (colour, shape and texture) [5][6]. The need for large-scale image dataset annotation introduced the concept of Automatic Image Annotation (AIA) [7][8][9][10]. The AIA technique combines the advantages of both traditional annotation techniques (text-based and CBIR) through keyword searching based on image content.…”
Section: Text-Based Approach / Content-Based Image Retrieval (CBIR)
confidence: 99%
“…In practice, it can increase the training intensity of low-frequency keywords for image samples in order to enhance the generalization performance of the whole model. To address this problem, we introduce an ANN-based auto-encoder method, which is used for unsupervised learning [2], [3]. The aim of an auto-encoder is to learn a representation for a set of data, typically for the purpose of dimensionality reduction.…”
Section: Balanced/Skewed Distribution (Abstract)
confidence: 99%
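
The auto-encoder mentioned in this excerpt can be sketched as a minimal reconstruction network. The Keras snippet below is a generic single-hidden-layer example for dimensionality reduction, not the citing paper's exact architecture; the layer sizes and training settings are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(input_dim, code_dim=32):
    # Encoder compresses the input features into a low-dimensional code;
    # the decoder reconstructs the original features from that code.
    inputs = tf.keras.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu")(inputs)
    recon = layers.Dense(input_dim, activation="linear")(code)
    autoencoder = models.Model(inputs, recon)
    encoder = models.Model(inputs, code)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Usage on a feature matrix X of shape (n_samples, input_dim):
#   autoencoder, encoder = build_autoencoder(X.shape[1])
#   autoencoder.fit(X, X, epochs=20, batch_size=64)   # learn to reconstruct the input
#   Z = encoder.predict(X)                            # reduced representation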
“…In AIA, the main idea is to automatically learn semantic concept models from a huge number of image samples and use these models to label new images with proper tags [1]. AIA has many applications in various fields, including accessing, searching, and navigating the huge amounts of visual data stored in online or offline data sources, as well as image manipulation and annotation applications used on mobile devices [2]-[4]. Typical image annotation approaches rely on human viewpoints, and their performance is highly dependent on inefficient manual operations.…”
Section: Introduction
confidence: 99%
“…We use the NDCG score to evaluate the performance of tag ranking compared to the baseline methods. Normalized Discounted Cumulative Gain (NDCG) [18] is a widely used measure for tag ranking. The NDCG score is computed as in (11), where r(i) is the relevance level of the i-th tag, K indicates that NDCG scores are calculated using the top K ranked tags, and z is a normalization constant ensuring that the NDCG score of the optimal ranking is 1.…”
Section: Tag Ranking
confidence: 99%
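
Equation (11) is not shown in this excerpt; assuming the standard NDCG@K definition matching the surrounding description (exponential gain, logarithmic position discount, normalized by the ideal ranking), a small reference implementation could look like this.

import numpy as np

def ndcg_at_k(relevance, k):
    # DCG of the predicted ranking divided by the DCG of the ideal
    # (descending-relevance) ranking, so a perfect ranking scores 1.
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))   # log2(i + 1) for positions i = 1..K
    dcg = np.sum((2.0 ** rel - 1.0) / discounts)
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = np.sum((2.0 ** ideal - 1.0) / discounts)
    return dcg / idcg if idcg > 0 else 0.0

# Example: relevance levels of the top-5 ranked tags for one image.
#   ndcg_at_k([3, 2, 3, 0, 1], k=5)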
“…; [1, 13-18] study tag relevance using textual information: for example, [1, 13, 17, 18] calculate the co-occurrence probability of tags and analyze the semantic relevance between them, [14] analyzes the ontological relationships between tags, and [1, 15] assume that important tags are placed before irrelevant ones; [19, 20] argue that tags are related to the image content, and after comparing it with three other factors, [19] concludes that image content is the key factor for tag relevance learning.…”
Section: Introduction
confidence: 99%