IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium
DOI: 10.1109/igarss.2018.8519419

Cross-Domain CNN for Hyperspectral Image Classification

Abstract: In this paper, we address the dataset scarcity issue in hyperspectral image classification. As only a few thousand pixels are available for training, it is difficult to effectively learn high-capacity Convolutional Neural Networks (CNNs). To cope with this problem, we propose a novel cross-domain CNN containing shared parameters that can be co-learned across multiple hyperspectral datasets. The network also contains non-shared portions designed to handle the dataset-specific spectral characteristics…
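The abstract describes a network split into layers shared across datasets and non-shared, dataset-specific portions. The sketch below only illustrates that layout under assumptions of our own (layer widths, kernel sizes, dataset names, and band/class counts are placeholders), not the authors' exact architecture; training would alternate mini-batches from each dataset so that the shared trunk is co-learned.

```python
import torch
import torch.nn as nn

class CrossDomainCNN(nn.Module):
    """Illustrative sketch: a shared feature extractor with
    dataset-specific entry layers and classifiers (sizes are assumptions)."""

    def __init__(self, band_counts, class_counts, hidden=128):
        super().__init__()
        # Non-shared entry layers: map each dataset's band count to a common width.
        self.entries = nn.ModuleDict({
            name: nn.Conv2d(bands, hidden, kernel_size=1)
            for name, bands in band_counts.items()
        })
        # Shared trunk, co-learned across all datasets.
        self.shared = nn.Sequential(
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Non-shared classifiers: one per dataset's label set.
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(hidden, n_cls, kernel_size=1)
            for name, n_cls in class_counts.items()
        })

    def forward(self, x, dataset):
        x = self.entries[dataset](x)
        x = self.shared(x)
        return self.heads[dataset](x)

# Hypothetical usage with two datasets; band/class counts are placeholders.
model = CrossDomainCNN({"indian_pines": 200, "salinas": 204},
                       {"indian_pines": 16, "salinas": 16})
logits = model(torch.randn(4, 200, 9, 9), dataset="indian_pines")
```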

Cited by 20 publications (12 citation statements) | References 9 publications

Citation statements (ordered by relevance):
“…With continuous improvements by many researchers, deeper neural networks have been proposed, such as AlexNet [17] and VGGNet [18]. Lee and Kwon [19] and Lee et al. [20] developed a cross-domain convolutional neural network for a variety of hyperspectral images. Across three sets of experiments, the results showed an accuracy improvement of 1% to 3% compared to the method using independent convolutional neural networks.…”
Section: Introduction
confidence: 99%
“…where H × W represents the height and width of the input x to the network, respectively, C is the number of classes, and y_{i,k} and ŷ_{i,k} are the ground-truth and predicted values for the i-th pixel x_i and the k-th class among the C possible classes, respectively [30]. The combined weighted loss, L_c, is defined as follows:…”
Section: Proposed Combined U-net Model
confidence: 99%
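The quoted passage describes a per-pixel, per-class loss over an H × W input, while the combined weighted loss itself is truncated in the excerpt. Below is a minimal sketch of the cross-entropy term implied by that notation, assuming one-hot ground truth y and softmax-normalized predictions ŷ; the weighting scheme of the cited combined loss is not reproduced.

```python
import torch
import torch.nn.functional as F

def per_pixel_cross_entropy(y_true, y_pred_logits):
    """Mean cross-entropy over all H*W pixels and C classes.

    y_true:        (H*W, C) one-hot ground truth
    y_pred_logits: (H*W, C) raw network outputs (softmax applied inside)
    """
    log_y_hat = F.log_softmax(y_pred_logits, dim=1)   # log ŷ_{i,k}
    loss_per_pixel = -(y_true * log_y_hat).sum(dim=1) # -Σ_k y_{i,k} log ŷ_{i,k}
    return loss_per_pixel.mean()                      # average over i = 1..H*W

# Example with hypothetical sizes: a 4x4 patch and 3 classes.
H, W, C = 4, 4, 3
logits = torch.randn(H * W, C)
labels = F.one_hot(torch.randint(0, C, (H * W,)), C).float()
print(per_pixel_cross_entropy(labels, logits))
```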
“…In this work, the word "semi-supervised" refers to the use of external images (from other image domains) to generate impostor patch-pairs. The proposed training scenario is inspired by some similar works on cross-domain adaptation and transfer learning that have been applied in many research works [19], [20], [21]. Therefore, our method requires nothing more than knowing the type of change to be addressed, so that the substituting images will be selected accordingly.…”
Section: Introduction
confidence: 99%
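As a rough illustration of the impostor patch-pair idea in the quote above (not the cited authors' pipeline), one way to build such pairs is to replace one element of a co-registered patch pair with a patch drawn from an external image domain; every name, shape, and the 50% substitution rate below are assumptions.

```python
import numpy as np

def make_pairs(domain_a, domain_b, external, n_pairs, rng=None):
    """Build genuine and impostor patch-pairs.

    domain_a, domain_b: aligned patches from the same scene, shape (N, h, w, c)
    external:           patches from an unrelated image domain, shape (M, h, w, c)
    Returns (left, right, label) with label 0 = unchanged, 1 = impostor.
    """
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(domain_a), size=n_pairs)
    left = domain_a[idx]
    right = domain_b[idx].copy()
    labels = np.zeros(n_pairs, dtype=np.int64)

    # For a random half of the pairs, substitute the right patch with an
    # external-domain patch so the pair simulates a "change".
    impostor = rng.random(n_pairs) < 0.5
    right[impostor] = external[rng.integers(0, len(external), size=impostor.sum())]
    labels[impostor] = 1
    return left, right, labels
```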