2015
DOI: 10.1016/j.neucom.2014.08.027

Image annotation based on feature fusion and semantic similarity

Cited by 21 publications (13 citation statements)
References 21 publications
“…Therefore, we display all of the annotation outcomes based on the five categories. We compare our learning transfer model, which is based on the label localization strategy, with the traditional image annotation method based on multi-feature fusion and semantic similarity in [ 27 ] and with the Gaussian mixture model that considers cross-modal correlations (GMM-MB) in [ 28 ]. We also conducted experimental comparisons in terms of precision with the feature fusion model proposed in [ 29 ] and the semantic extension model (SEM) proposed in [ 30 ], both of which use a CNN to extract features.…”
Section: Results
Mentioning confidence: 99%
“…Table 1 shows the mean precision, recall, and F1 for the proposed methodology and for three recently presented annotation methods (IAGA, 2014 [20]; feature fusion and semantic similarity, 2014 [5]; MLRank, 2013 [21]). By weighting the edges of the graph, some areas in each image receive higher density than others.…”
Section: Experimental Results and Evaluation
Mentioning confidence: 99%
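
As a point of reference for the scores reported in that excerpt, the following is a minimal sketch of how mean per-image precision, recall, and F1 are commonly computed in image annotation evaluation. The function name and the toy label sets are illustrative assumptions, not values or code from the cited papers.

```python
# Minimal sketch: mean precision / recall / F1 over a set of annotated images,
# assuming each image has a set of ground-truth labels and a set of predicted
# labels (e.g. the top-5 annotations returned by a model).

def mean_precision_recall_f1(ground_truth, predictions):
    """ground_truth, predictions: lists of label sets, one entry per image."""
    precisions, recalls = [], []
    for gt, pred in zip(ground_truth, predictions):
        hits = len(gt & pred)                     # correctly predicted labels
        precisions.append(hits / len(pred) if pred else 0.0)
        recalls.append(hits / len(gt) if gt else 0.0)
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1

if __name__ == "__main__":
    gt = [{"sky", "sea", "boat"}, {"tree", "grass"}]
    pred = [{"sky", "sea", "cloud", "boat", "sun"},
            {"tree", "flower", "grass", "sky", "dog"}]
    print(mean_precision_recall_f1(gt, pred))
```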
“…[1] A great deal of research has been done in the field of image annotation, and it can be grouped into three kinds of models: probabilistic models, category-based models, and models based on the nearest neighborhood. [5] Most probabilistic models [6,7,8,9] estimate the joint probability of the image content and keywords. Category-based models [10,11] treat image annotation as a supervised classification problem over categories.…”
Section: Introduction
Mentioning confidence: 99%
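
As a rough illustration of the probabilistic family mentioned in that excerpt, the sketch below ranks keywords by an approximated joint probability of image features and labels in the style of relevance models. The Gaussian kernel, the smoothing constants, and all names are assumptions made for illustration and do not reproduce any specific method cited in the excerpt.

```python
import numpy as np

# Sketch of a relevance-model style probabilistic annotator:
# P(w, I) is approximated by summing over training images J,
#   P(w, I) ~ sum_J P(J) * P(I | J) * P(w | J),
# with P(I | J) a Gaussian kernel on feature distance, P(w | J) a smoothed
# label indicator, and P(J) assumed uniform.

def annotate(query_feat, train_feats, train_labels, vocab, beta=1.0, top_k=5):
    # P(I | J): Gaussian kernel on Euclidean distance between feature vectors
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    p_img = np.exp(-beta * dists ** 2)
    p_img /= p_img.sum()

    # P(w | J): smoothed indicator of whether training image J carries label w
    scores = {}
    for w in vocab:
        p_w_given_j = np.array([0.9 if w in labels else 0.1 for labels in train_labels])
        scores[w] = float(np.dot(p_img, p_w_given_j))

    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage with 2-D features and a tiny vocabulary
train_feats = np.array([[0.1, 0.2], [0.9, 0.8], [0.4, 0.5]])
train_labels = [{"sky", "sea"}, {"tree", "grass"}, {"sea", "boat"}]
print(annotate(np.array([0.2, 0.3]), train_feats, train_labels,
               vocab=["sky", "sea", "tree", "grass", "boat"], top_k=3))
```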
“…Saliency can then be determined by ranking these nodes based on their similarities to background and foreground queries. In [21], a multi-feature fusion method based on semantic similarity was developed for image annotation.…”
Section: Introduction
Mentioning confidence: 99%
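
To make the graph-ranking idea in that excerpt concrete, here is a minimal manifold-ranking sketch in which nodes (for example, superpixels) are scored by their relevance to query nodes such as background or foreground seeds. The affinity matrix, the value of alpha, and the choice of seeds are illustrative assumptions, not the method of [21] or of the saliency work it cites.

```python
import numpy as np

# Graph-based ranking sketch: given a symmetric affinity matrix W over nodes
# and a query indicator vector y (1 for seed nodes, 0 otherwise), the ranking
# scores follow the closed form f = (I - alpha * S)^(-1) y, where S is the
# symmetrically normalized affinity matrix.

def manifold_rank(W, y, alpha=0.99):
    """W: (n x n) symmetric affinity matrix; y: (n,) query indicator vector."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = d_inv_sqrt @ W @ d_inv_sqrt                       # normalized affinity
    return np.linalg.solve(np.eye(len(y)) - alpha * S, y)  # closed-form scores

# Example: 4 nodes, node 0 used as a background seed; saliency could then be
# derived from low relevance to the background query.
W = np.array([[0.0, 0.8, 0.1, 0.0],
              [0.8, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])
print(manifold_rank(W, y))
```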