2017
DOI: 10.1109/tnnls.2017.2712793

Convolutional Sparse Autoencoders for Image Classification

Abstract: Convolutional sparse coding (CSC) can model local connections between image content and reduce code redundancy compared with patch-based sparse coding. However, CSC needs a complicated optimization procedure to infer the codes (i.e., feature maps). In this brief, we propose a convolutional sparse auto-encoder (CSAE), which leverages the structure of the convolutional AE and incorporates max-pooling to heuristically sparsify the feature maps for feature learning. Together with competition over fea…
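The max-pooling sparsification the abstract describes can be illustrated with a minimal sketch: within each non-overlapping pooling window of a feature map, keep only the maximum activation and zero out the rest. This is an illustrative reconstruction of the heuristic, not the authors' implementation; the function name and the divisibility assumption are ours.

```python
import numpy as np

def maxpool_sparsify(feature_maps, pool=2):
    """Within each non-overlapping pool x pool window of every feature
    map, keep only the maximum activation and zero the rest.

    feature_maps: array of shape (channels, H, W); H and W are assumed
    divisible by `pool` (an illustrative simplification).
    """
    c, h, w = feature_maps.shape
    out = np.zeros_like(feature_maps)
    for ch in range(c):
        for i in range(0, h, pool):
            for j in range(0, w, pool):
                win = feature_maps[ch, i:i + pool, j:j + pool]
                # Locate the window's maximum and copy only that entry.
                k = np.unravel_index(np.argmax(win), win.shape)
                out[ch, i + k[0], j + k[1]] = win[k]
    return out

fm = np.arange(16, dtype=float).reshape(1, 4, 4)
sparse = maxpool_sparsify(fm, pool=2)
# Each 2x2 window retains exactly one nonzero entry.
print(int(np.count_nonzero(sparse)))  # 4
```

The result is a feature map whose sparsity level is fixed by the pooling size rather than by an explicit L1 penalty, which is what lets the CSAE avoid the iterative inference that ordinary CSC requires.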

Cited by 89 publications (41 citation statements) | References 19 publications | Citing publications span 2018–2023
“…Our approach could classify images at the highest accuracy. Although the computational complexity of our approach is much lower than deep learning methods, the accuracy of our method is higher than deep learning method reported in [24] and equal to deep learning method reported in [25], which shows advantage of our method in both reducing computational cost and increasing accuracy.

[13]: 67.0% / 73.2%
LLC [14]: 65.4% / 73.4%
Local Pooling [26]: – / 77.3%
GLP [27]: 70.3% / 82.7%
DASDL_p [28]: – / 75.5%
N3SC encoder [29]: 67.5% / 73.9%
FScSPM (Our Approach): 76.3% / 84.8%

Table 3.…”
Section: Caltech-101 Dataset
confidence: 80%
“…The differences between the other CSC-based methods (Chen, Li, Ma, & Wei, 2016; Luo et al., 2017; Yu & Sun, 2017) and the proposed method are described below. In the method proposed by Chen et al., filters are constructed for each class using training data.…”
Section: Test Phase of CSDRN
confidence: 99%
“…However, since the proposed method enables input features to be adaptively determined from input images, the obtained results are robust. Luo et al. (2017) achieved a reduction in computational costs by using a convolutional sparse autoencoder, which is proposed in their paper. Moreover, dictionaries calculated via CSC were set to the initial filters of a two- or three-layered CNN, and they examined the use of a combination of CNN and CSC.…”
Section: Test Phase of CSDRN
confidence: 99%
“…Convolutional sparse coding (CSC) and convolutional auto-encoders (CAEs) extend the original patch-based models to cope with multidimensional and large-sized images. Both have performed well in natural image reconstruction, denoising, and classification [16,17]. CAEs, in particular, can learn global structures, using multidimensional filters with convolutional operation, and unlike patch-based methods, they preserve the relationships between neighbourhood and spatial information.…”
Section: Introduction
confidence: 99%
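The point made in the last statement — that convolutional models slide filters over the whole image rather than coding isolated patches — can be sketched with an untrained single-filter encoder/decoder pair. This is a minimal illustration of the shape relationships in a convolutional AE, not the cited authors' architecture; the helper names and the averaging filter are assumptions.

```python
import numpy as np

def conv2d_valid(img, kern):
    # Plain 2-D "valid" cross-correlation, written out for clarity.
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def conv2d_full(fm, kern):
    # "Full" correlation used on the decoder side: zero-pad the feature
    # map so the output regains the input's spatial size.
    kh, kw = kern.shape
    padded = np.pad(fm, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d_valid(padded, kern)

img = np.random.default_rng(0).normal(size=(8, 8))
filt = np.ones((3, 3)) / 9.0              # placeholder untrained filter
code = conv2d_valid(img, filt)            # encoder feature map, 6x6
recon = conv2d_full(code, filt)           # decoder output, back to 8x8
print(code.shape, recon.shape)            # (6, 6) (8, 8)
```

Because the same filter visits every spatial location, neighbouring code entries share overlapping receptive fields, which is the property the Introduction credits CAEs with preserving over patch-based methods.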