2016
DOI: 10.1109/lgrs.2015.2499239
Deep Learning Earth Observation Classification Using ImageNet Pretrained Networks

Abstract: Deep learning methods such as convolutional neural networks (CNNs) can deliver highly accurate classification results when provided with large enough data sets and respective labels. However, using CNNs along with limited labeled data can be problematic, as this leads to extensive overfitting. In this letter, we propose a novel method by considering a pretrained CNN designed for tackling an entirely different classification problem, namely, the ImageNet challenge, and exploit it to extract an initial set of re…

Cited by 554 publications (298 citation statements)
References 9 publications
“…Several deep architectures [Deng, 2014, Schmidhuber, 2015] have been employed, with the Deep Belief Networks, Autoencoders, Convolutional Neural Networks and Deep Boltzmann Machines being some of the most commonly used in the literature for a variety of problems. In particular, for the classification of remote sensing data certain deep architectures have provided highly accurate results [Mnih and Hinton, 2010, Chen et al, 2014, Vakalopoulou et al, 2015, Basu et al, 2015, Makantasis et al, 2015, Marmanis et al, 2016].…”
Section: Classmentioning
confidence: 99%
“…Similar to [Vakalopoulou et al, 2015, Marmanis et al, 2016], the pretrained AlexNet network [Krizhevsky et al, 2012] has been employed here for feature extraction. In particular, features from the last layer (FC7) were extracted using two spectral band combinations (red-green-blue and NIR-red-green).…”
Section: Alexnet-pretrained Networkmentioning
confidence: 99%
“…Pretrained CNN models like AlexNet (Krizhevsky and Sutskever, 2010), VGGNet, or GoogleNet (Szegedy and Liu, 2015) that have been trained on a large dataset such as ImageNet can be used for other visual recognition tasks without any need to retrain the first few layers. Such a property is very useful for classification tasks in remote sensing, where the acquisition of large sets of training data requires a lot of effort and cost (Marmanis et al, 2016). In addition to fine-tuning a pretrained model for a new classification task, a pretrained CNN can be treated as a fixed feature extractor.…”
Section: Cnn and Pre-trained Modelsmentioning
confidence: 99%
“…As training CNNs from scratch needs large labeled datasets, which are hard to obtain in the remote sensing community, using pre-trained CNN models is suggested. Marmanis et al (2016) used a CNN model pre-trained on the ImageNet dataset (Krizhevsky et al, 2012) and successfully achieved good results in the classification of remote sensing datasets. To select the roof types, two methodologies are utilized for classification of roof patches.…”
Section: Introductionmentioning
confidence: 99%
“…The results showed that fine-tuning tends to be the best-performing strategy. In [31], Marmanis et al proposed a two-stage framework for earth observation classification. In the first stage, an initial set of representations is extracted by using a CNN pre-trained on the ImageNet dataset.…”
Section: Introductionmentioning
confidence: 99%