“…GETNET [6] obtains the second-best performance on the Barbara dataset, but it cannot achieve satisfactory accuracy on the Bay dataset. By contrast, TDSSC [20] achieves relatively stable accuracy on both datasets, as it captures a more robust feature representation by fusing features from the spectral direction and the two spatial directions. In the proposed ASSCDN, spectral and spatial features are fused adaptively for each patch, which helps to obtain more reliable detection results.…”
Section: Results and Comparison on Barbara and Bay Datasets
“…In this section, we tested the performance of the proposed ASSCDN on three real, publicly available HSI datasets. Moreover, to verify the superiority of the proposed ASSCDN, eight approaches are selected for comparison: four widely used methods (CVA [61], KNN, SVM, and RCVA [39]) and four deep learning-based methods (DCVA [50], DSFA [51], GETNET [6], and TDSSC [20]). Furthermore, five metrics (OA, KC, F1, PRE, and REC) are used to evaluate the accuracy of the proposed ASSCDN and the compared methods.…”
Section: Comparison Results and Analysis
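The five metrics named in the snippet above (OA, KC, F1, PRE, REC) are the standard binary change-detection measures derived from a confusion matrix. A minimal sketch of how they are typically computed from a predicted and a reference change map (function name and layout are illustrative, not taken from the cited papers):

```python
import numpy as np

def cd_metrics(pred, gt):
    """Compute OA, Cohen's kappa (KC), F1, precision (PRE) and
    recall (REC) for binary 0/1 change maps."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    tp = np.sum((pred == 1) & (gt == 1))  # changed, detected
    tn = np.sum((pred == 0) & (gt == 0))  # unchanged, rejected
    fp = np.sum((pred == 1) & (gt == 0))  # false alarm
    fn = np.sum((pred == 0) & (gt == 1))  # missed change
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    pre = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
    # kappa: overall accuracy corrected for chance agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (oa - pe) / (1 - pe) if pe != 1 else 1.0
    return dict(OA=oa, KC=kc, F1=f1, PRE=pre, REC=rec)
```

Kappa is included alongside OA because unchanged pixels usually dominate a change map, so chance-corrected agreement is more informative than raw accuracy.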
“…Moreover, we employed a patch size of 15 and 250 training samples to run the proposed ASSCDN on these three datasets. In addition, to ensure a fair comparison, GETNET [6] and TDSSC [20] are deployed under the same semi-supervised learning framework as the proposed ASSCDN.…”
Section: Comparison Results and Analysis
“…This method introduces an unmixing-based subpixel representation to fuse multi-source information for HSI CD. (8) TDSSC, which captures representative spectral-spatial features by concatenating features from the spectral direction and the two spatial directions, thus improving detection performance [20].…”
Section: Comparative Methods
“…A recurrent 3D fully convolutional network is designed in [12] to capture spectral-spatial features of HSIs simultaneously for CD. Zhan et al. proposed a three-direction spectral-spatial convolutional neural network (TDSSC) in [20], which captures representative spectral-spatial features by concatenating features from the spectral direction and the two spatial directions, thus improving detection performance. Such methods usually weight spatial and spectral features equally for joint analysis and classification and have achieved good performance, but they commonly share the following problems:…”
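The "three-direction" idea attributed to TDSSC above treats an HSI patch as sequences along the spectral direction and the two spatial directions, extracting features along each and concatenating them. The following toy sketch only illustrates that data layout; the real TDSSC uses learned convolutions along each direction, whereas this stand-in uses simple mean pooling:

```python
import numpy as np

def three_direction_features(patch):
    """Illustrative only: pool an (H, W, B) patch along each of the
    three directions and concatenate the results, mimicking the
    spectral + two-spatial-directions fusion described for TDSSC."""
    spectral  = patch.mean(axis=(0, 1))  # shape (B,): profile across bands
    spatial_h = patch.mean(axis=(1, 2))  # shape (H,): profile across rows
    spatial_w = patch.mean(axis=(0, 2))  # shape (W,): profile across columns
    return np.concatenate([spectral, spatial_h, spatial_w])
```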
Joint analysis of spatial and spectral features has long been an important approach for change detection in hyperspectral images. However, many existing methods cannot extract effective spatial features from the data itself. Moreover, when combining spatial and spectral features, a coarse, globally uniform combination ratio is usually required. To address these problems, in this paper we propose a novel attention-based spatial and spectral network with a PCA-guided self-supervised feature extraction mechanism to detect changes in hyperspectral images. The framework consists of two steps. First, a self-supervised mapping is established from each patch of the difference map to the principal components of the patch's central pixel; a multi-layer convolutional neural network then extracts the main spatial features of the differences. In the second step, an attention mechanism is introduced. Specifically, the weighting factor between the spatial and spectral features of each pixel is adaptively computed from the concatenated spatial and spectral features, and the computed factor is then applied proportionally to the corresponding features. Finally, joint analysis of the weighted spatial and spectral features yields the change status of pixels at different positions. Experimental results on several real hyperspectral change detection datasets show the effectiveness and superiority of the proposed method.
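The adaptive weighting step in the abstract above can be sketched as a per-pixel gate computed from the concatenated features. This is a hypothetical NumPy sketch of that idea, not the authors' implementation: the projection parameters `w` and `b` stand in for whatever learned layer produces the weighting factor.

```python
import numpy as np

def adaptive_fusion(spatial_feat, spectral_feat, w, b):
    """Per-pixel adaptive fusion sketch (assumed form, not the paper's
    exact architecture). spatial_feat/spectral_feat: (N, D) arrays;
    w: (2D, 1) projection; b: scalar bias."""
    concat = np.concatenate([spatial_feat, spectral_feat], axis=-1)
    # sigmoid gate in (0, 1): the weight assigned to the spatial branch
    alpha = 1.0 / (1.0 + np.exp(-(concat @ w + b)))
    # apply the factor proportionally to the two branches
    fused = alpha * spatial_feat + (1.0 - alpha) * spectral_feat
    return fused, alpha
```

Because `alpha` is computed per pixel from the features themselves, each position gets its own spatial/spectral mixing ratio rather than one global ratio for the whole image, which is the limitation the abstract calls out.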
Recently, ground-cover change detection (CD) driven by bitemporal hyperspectral images (HSIs) has become a hot topic in the remote sensing community. The HSI-CD task poses two challenges: (1) attribute feature representation of pixel pairs and (2) feature extraction of the attribute patterns of pixel pairs. To address these problems, a novel spectral-spatial sequence characteristics-based convolutional transformer (S3C-CT) method is proposed for the HSI-CD task. In the designed method, an eigenvalue extrema-based band selection strategy is first introduced to pick out spectral information with salient attribute patterns. Then, a 3D tensor with spectral-spatial sequence characteristics is proposed to represent the attribute features of pixel pairs in the bitemporal HSIs. Next, a fusion framework of a convolutional neural network (CNN) and a Transformer encoder (TE) is designed to extract high-order sequence semantic features, taking into account both local context information and global sequence dependencies. Specifically, a spatial-spectral attention mechanism is employed to prevent information reduction and enhance dimensional interactivity between the CNN and the TE. Finally, the binary change map is determined by the fully-connected layer. Experimental results on real HSI datasets indicate that the proposed S3C-CT method outperforms other well-known and state-of-the-art detection approaches in terms of detection performance.