2018
DOI: 10.1049/iet-ipr.2017.0917

Image region annotation based on segmentation and semantic correlation analysis

Cited by 14 publications (14 citation statements). References 43 publications (58 reference statements).
“…Semantic ambiguity can be resolved by identifying the regions of an image to which multiple labels are assigned, which is the motivation for introducing the tri-relational graph. In the tri-relational graph annotation method, the image is divided into different regions and a region set T is prepared [26,31]. A set of semantic labels C and an image set X are also prepared.…”
Section: Traditional Graph Versus Tri-Relational Graph Learning
confidence: 99%
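The quoted description amounts to a graph over three node sets: the image set X, the region set T obtained by segmentation, and the label set C, with images linked to their regions and regions linked to candidate labels. Below is a minimal sketch of that structure, assuming nothing beyond the quote; the class and method names (`TriRelationalGraph`, `add_image`, `link_region_to_label`) are illustrative, not the cited authors' API.

```python
# Minimal sketch of the tri-relational graph structure described above.
# Node sets: images (X), segmented regions (T), semantic labels (C).
# All names here are illustrative assumptions, not the cited method's API.
from collections import defaultdict

class TriRelationalGraph:
    def __init__(self):
        self.images = set()        # X: image identifiers
        self.regions = set()       # T: region identifiers
        self.labels = set()        # C: semantic labels
        self.image_regions = defaultdict(set)   # image -> its regions
        self.region_labels = defaultdict(set)   # region -> labels resolved per region
        self.image_labels = defaultdict(set)    # image -> image-level (ambiguous) labels

    def add_image(self, image_id, region_ids, image_level_labels):
        """Register an image, its segmented regions, and its image-level labels."""
        self.images.add(image_id)
        self.image_labels[image_id].update(image_level_labels)
        for r in region_ids:
            self.regions.add(r)
            self.image_regions[image_id].add(r)

    def link_region_to_label(self, region_id, label):
        """Resolve ambiguity by attaching a specific label to a specific region."""
        self.labels.add(label)
        self.region_labels[region_id].add(label)

# Usage: an image carrying two labels whose ambiguity is resolved per region.
g = TriRelationalGraph()
g.add_image("img_001", ["img_001_r0", "img_001_r1"], {"sky", "grass"})
g.link_region_to_label("img_001_r0", "sky")
g.link_region_to_label("img_001_r1", "grass")
```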
“…To generate the image graph, the visual similarities of all the regions are calculated and compared. Segmentation of the image is therefore an important task in TG; it is achieved through the texture-enhanced JSEG algorithm, which relies on regional latent semantic dependency [31]. For correct, relatively independent segmentation, the texture and colour class maps are combined by texture-enhanced segmentation (TJSEG).…”
Section: Fig. 2
confidence: 99%
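The image-graph step above compares the visual similarity of every pair of segmented regions. A small sketch of that computation follows; the Gaussian (RBF) affinity on feature vectors is a common choice assumed here for illustration, and may not match the exact similarity measure used in the cited work.

```python
# Sketch of building a region-similarity graph from visual features.
# The Gaussian (RBF) affinity is an assumed, commonly used similarity;
# the cited work's exact measure may differ.
import numpy as np

def region_similarity_graph(features, sigma=1.0):
    """features: (n_regions, d) array of visual descriptors, one row per region.
    Returns an (n_regions, n_regions) affinity matrix W with entries in (0, 1]."""
    diff = features[:, None, :] - features[None, :, :]   # pairwise differences
    dist2 = np.sum(diff ** 2, axis=-1)                    # squared Euclidean distances
    return np.exp(-dist2 / (2.0 * sigma ** 2))            # Gaussian affinity

# Usage with toy descriptors for four regions.
feats = np.random.rand(4, 16)
W = region_similarity_graph(feats, sigma=0.5)
print(W.shape)  # (4, 4); larger values mean visually more similar regions
```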
“…The authors in [54] consider undirected graphical models that jointly exploit low-level features and contextual information (such as concept co-occurrences and spatial correlation statistics) to classify local image blocks into predefined concepts. Zhang et al. [56] propose a region annotation framework that exploits the semantic correlation of segmented image regions; this method assigns each segmented region to one concept and learns the relationships between labels and region locations using PSA. A hybrid annotation approach based on a visual attention mechanism and conditional random fields is proposed in [57] in order to pay more attention to the salient regions during the annotation process.…”
Section: Context Modeling
confidence: 99%
“…Image annotation is one of the major challenges in computer vision; it aims at assigning keywords (a.k.a. labels or concepts) to images. The difficulty in image annotation stems from the extreme variability of the learned concepts and their versatile content, which is usually described with handcrafted or learned representations [1][2][3][4][5][6][7][8][9][10][11]. However, due to its limited representational power, content is usually upgraded with context in order to capture both the intrinsic and the extrinsic…”
Section: Introduction
confidence: 99%