2017
DOI: 10.1016/j.inffus.2016.12.010

Visual feature coding based on heterogeneous structure fusion for image classification

Cited by 25 publications (15 citation statements)
References 29 publications
“…Lin et al. [49] proposed a local visual feature coding based on heterogeneous structure fusion to overcome the limitation of capturing intrinsic invariance in intra-class images or image structure for large-variability image classification. Our method provides 3.18% higher accuracy compared to their approach.…”
Section: Experimental Datasets and Results (mentioning)
confidence: 99%
“…The second kind of method attempts to mine the nonlinear relationships of the heterogeneous feature structure based on the global feature [10,11] or the local feature encoding [12] to find the complementarity of the different structures in multiple representations. These methods can bridge the gap between heterogeneous structures for a uniform fused feature representation.…”
Section: Structure Fusion (mentioning)
confidence: 99%
“…to discriminate the different data objects. The recent learning methods for mining multi-graph structure information mainly fall into two categories. One is structure fusion [7-17] or diffusion on the tensor product graph [18-23] based on complete data, which includes every view observation. Another is graph convolutional networks for salient graph structure preservation [6] or node information fusion [24,25] based on incomplete data, which lacks some view observation data.…”
mentioning
confidence: 99%
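
For the first category, diffusion on a tensor product graph, the following sketch shows the basic mechanics under stated assumptions: two view-specific affinity graphs over the same objects are coupled through a Kronecker product and a damped random-walk diffusion is iterated on the coupled graph. The graphs, the damping factor, and the fixed-point recurrence are illustrative choices, not the exact procedures of [18-23].

```python
# Minimal sketch (assumed, simplified): affinity diffusion on the tensor
# product graph built from two view-specific graphs over the same objects.
import numpy as np

def row_normalize(W):
    """Turn an affinity matrix into a random-walk transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def tpg_diffusion(W1, W2, steps=50, alpha=0.9):
    """Run a damped random-walk diffusion on the tensor product graph P1 (x) P2.

    The Kronecker product couples every pair of nodes across the two views,
    so the diffused kernel mixes structure information from both graphs.
    """
    P = np.kron(row_normalize(W1), row_normalize(W2))    # tensor product graph
    n = P.shape[0]
    Q = np.eye(n)
    for _ in range(steps):
        Q = alpha * P @ Q + (1.0 - alpha) * np.eye(n)    # converges to (1-a)(I - aP)^-1
    return Q

# Usage: two small symmetric affinity graphs over the same 5 objects, one per view.
rng = np.random.default_rng(0)
W1 = rng.random((5, 5)); W1 = (W1 + W1.T) / 2.0
W2 = rng.random((5, 5)); W2 = (W2 + W2.T) / 2.0
Q = tpg_diffusion(W1, W2)
print(Q.shape)   # (25, 25): an affinity over node pairs across the two views
```

The diffused matrix lives on the coupled node-pair space, which is what distinguishes tensor-product-graph diffusion from running a separate diffusion on each view and averaging the results afterwards.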
“…
Algorithms               15 Training   30 Training
HVFC-HSF [30]            70.7%         78.7%
CLGC (RGB-RGB) [31]      -             72.6%
CSAE [24]                64.0%         71.4%
Hybrid-CNN [25]          -             84.8%
FScSPM (Our Approach)    76.3%         84.8%
…”
Section: Algorithms / 15 Training / 30 Training (mentioning)
confidence: 99%