2017
DOI: 10.1109/tgrs.2016.2640186

Unsupervised Feature Learning for Land-Use Scene Recognition

Cited by 49 publications (31 citation statements); References 35 publications.
“…To guarantee comparability between the accuracy of the proposed method and those reported in the works presented in [2,3,5,7-13], the labeled dataset is divided into training and testing sets using a training-testing ratio of 80-20%, and five-fold cross validation is conducted. That is, the labeled image patches are randomly divided into five almost equal, non-overlapping groups, with one group used as the testing set and the remaining four groups used as the training set in each fold.…”
Section: Methods (mentioning)
confidence: 99%
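The excerpt above describes the evaluation protocol only in prose. A minimal sketch of that protocol (not the cited authors' code) is given below: five-fold cross validation in which each fold trains on roughly 80% of the labeled patches and tests on the remaining 20%. The array sizes, the class labels, and the linear SVM classifier are placeholder assumptions made only so the example runs.

```python
# Minimal sketch of 80-20% five-fold cross validation (not the authors' code).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2100, 128))      # feature vectors of labeled image patches (dummy data)
y = rng.integers(0, 21, size=2100)    # land-use class labels (dummy data)

# Five non-overlapping folds: one fold tests, the other four train.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_acc = []
for train_idx, test_idx in skf.split(X, y):
    clf = LinearSVC(max_iter=5000).fit(X[train_idx], y[train_idx])
    fold_acc.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy over 5 folds: {np.mean(fold_acc):.4f}")
```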
“…Specifically, edge-like and corner-like bases that resemble the neuron responses of the primary visual cortex (V1) and visual extrastriate cortical area two (V2), respectively, were learnt by K-means clustering. Fan et al [11] utilized a multipath sparse coding architecture to extract dense low-level features from the raw data. The sparse features extracted from different paths were then concatenated to represent the whole image.…”
Section: Related Work (mentioning)
confidence: 99%
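The excerpt above summarizes the unsupervised feature learning idea in words. The sketch below illustrates the general technique, learning a dictionary of patch bases with K-means and using the centroids as filters; it is not the specific method of [11] or of the indexed paper, and the patch size, cluster count, and random input image are assumptions made only so the example runs.

```python
# Minimal sketch of K-means-based unsupervised feature learning on image patches.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((256, 256))        # stand-in for a grayscale land-use scene

# Sample small patches, flatten them, and normalize (zero mean, unit norm).
patches = extract_patches_2d(image, (8, 8), max_patches=5000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)
patches /= np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8

# K-means centroids act as the learned bases; on real imagery (unlike this
# random input) they tend to resemble edge- and corner-like filters.
kmeans = MiniBatchKMeans(n_clusters=64, batch_size=256, random_state=0).fit(patches)
bases = kmeans.cluster_centers_       # (64, 64): one flattened 8x8 filter per row

# Encode patches as similarities to the learned bases (a simple linear code).
codes = patches @ bases.T
print(codes.shape)                    # (5000, 64)
```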
“…
Method                       Accuracy (%)
BoVW [15]                    71.86
SPM [19]                     74.0
SPCK++ [4]                   77.38
MS-based Correlaton [20]     81.32 ± 0.92
UFL [7]                      81.67 ± 1.23
SG + UFL [8]                 82.72 ± 1.18
UFL-SC [10]                  90.26 ± 1.51
UFC + MSC [11]               91.95 ± 0.72
CCM-BoVW [21]                86.64 ± 0.81
PSR [22]                     89.1
MSIFT [6]                    90.97 ± 1.81
MS-CLBP + FV [56]            93.0 ± 1.2
MTJSLRC [55]                 91.07 ± 0.67
VLAT [57]                    94.3
MBVW [25]                    96.14
OverFeat [31]                90.91 ± 1.19
CaffeNet [31]                93.42 ± 1.0
GoogLeNet + Fine-tune [53]   97.1
…”
Section: Methods (mentioning)
confidence: 99%
“…HRS images provide more of the appearance and spatial arrangement information needed in land-use scene category recognition [2]. It is usually difficult, however, to recognize land-use scene categories because they often comprise multiple land covers or ground objects [3-11], such as airports with airplanes, runways and grass. Land-use scene categories are largely affected and determined by human social activities.…”
Section: Introduction (mentioning)
confidence: 99%