Classifying remote sensing images is vital for interpreting image content. Current remote sensing image scene classification methods based on convolutional neural networks suffer from excessive parameters and heavy computational costs. More efficient, lightweight CNNs have fewer parameters and require less computation, but their classification performance is generally weaker. We propose a more efficient and lightweight convolutional neural network method that improves classification accuracy with a small training dataset. Inspired by fine-grained visual recognition, this study introduces a bilinear convolutional neural network model for scene classification. First, the lightweight convolutional neural network MobileNetV2 is used to extract deep, abstract image features. Each feature is then transformed into two features by two different convolutional layers. The transformed features undergo a Hadamard product operation to obtain an enhanced bilinear feature. Finally, the bilinear feature, after pooling and normalization, is used for classification. Experiments are performed on three widely used datasets: UC Merced, AID, and NWPU-RESISC45. Compared with other state-of-the-art methods, the proposed method has fewer parameters and lower computational cost while achieving higher accuracy. By fusing features with bilinear pooling, performance and accuracy in remote sensing scene classification can be greatly improved, and the approach can be applied to other remote sensing image classification tasks.
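The core bilinear interaction described above can be sketched in plain Python. This is a minimal illustration, not the trained model: the two convolutional transforms are stood in for by small linear projections, and the weights `w_a` and `w_b` are arbitrary values chosen for the example.

```python
import math

def linear(x, w):
    """Project vector x with weight matrix w (one row per output dim)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def hadamard_bilinear(feature, w_a, w_b):
    """Transform one feature with two projections, then take their
    element-wise (Hadamard) product to form the bilinear feature."""
    a = linear(feature, w_a)
    b = linear(feature, w_b)
    return [ai * bi for ai, bi in zip(a, b)]

def signed_sqrt_l2(v, eps=1e-12):
    """Signed square root followed by L2 normalization, the usual
    post-processing of bilinear features before classification."""
    v = [math.copysign(math.sqrt(abs(x)), x) for x in v]
    norm = math.sqrt(sum(x * x for x in v)) + eps
    return [x / norm for x in v]

# Toy example: a 3-dim deep feature and two illustrative 2x3 projections.
f = [1.0, -2.0, 0.5]
w_a = [[0.2, 0.1, 0.4], [0.0, 0.3, 0.5]]
w_b = [[0.1, 0.0, 0.4], [0.2, 0.1, 0.6]]
z = signed_sqrt_l2(hadamard_bilinear(f, w_a, w_b))
```

The resulting `z` is a unit-norm descriptor that a linear classifier can consume directly.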
Remote sensing image scene classification is an important means of understanding remote sensing images. Convolutional neural networks have been successfully applied to remote sensing image scene classification and have demonstrated remarkable performance. However, with improvements in image resolution, remote sensing image categories are becoming increasingly diverse, and problems such as high intraclass diversity and high interclass similarity have arisen. The ability of ordinary convolutional neural networks to distinguish increasingly complex remote sensing images remains limited. Therefore, we propose a feature fusion framework based on hierarchical attention and bilinear pooling, called HABFNet, for the scene classification of remote sensing images. First, the deep convolutional neural network ResNet50 is used to extract deep features from different layers of the image, and these features are fused to boost their robustness and effectiveness. Second, we design an improved channel attention scheme to enhance the features from different layers. Finally, the enhanced features are cross-layer bilinearly pooled and fused, and the fused features are used for classification. Extensive experiments were conducted on three publicly available remote sensing image benchmarks. Comparisons with state-of-the-art methods demonstrate that the proposed HABFNet achieves competitive classification performance.
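The abstract does not spell out the improved channel attention scheme, so the sketch below only illustrates the general squeeze-and-excitation idea behind channel attention: each channel is squeezed to a scalar by global average pooling, a small gating network produces a per-channel weight, and the channels are rescaled. The weights `w1` and `w2` and all shapes are assumptions made for the example.

```python
import math

def channel_attention(feature_maps, w1, w2):
    """SE-style channel attention: squeeze each channel to a scalar by
    global average pooling, pass the scalars through a tiny two-layer
    gate (ReLU then sigmoid), and rescale the original channels."""
    # Squeeze: one scalar per channel (feature_maps: list of flat channels).
    s = [sum(ch) / len(ch) for ch in feature_maps]
    # Excite: bottleneck layer with ReLU, then expand with sigmoid gates.
    h = [max(0.0, sum(w * si for w, si in zip(row, s))) for row in w1]
    g = [1 / (1 + math.exp(-sum(w * hi for w, hi in zip(row, h)))) for row in w2]
    # Rescale each channel by its gate in (0, 1).
    return [[gi * x for x in ch] for gi, ch in zip(g, feature_maps)]

# Toy example: two flattened channels and illustrative gate weights.
maps = [[1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]]
w1 = [[0.5, 0.5]]      # squeeze 2 channels -> 1 hidden unit
w2 = [[1.0], [-1.0]]   # expand back to 2 per-channel gates
out = channel_attention(maps, w1, w2)
```

In a real network the gate weights are learned, so informative channels receive weights near 1 and noisy ones are suppressed.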
Remote sensing image object detection has numerous important applications. However, complex backgrounds and large differences in object scale pose considerable challenges in the detection task. To overcome these issues, we propose a one-stage remote sensing image object detection model: the multi-feature information complementary detector (MFICDet). This detector contains a positive and negative feature guidance module (PNFG) and a global feature information complementary module (GFIC). Specifically, the PNFG refines features that are beneficial for object detection and identifies noisy features in the complex background of the abstract features; the proportion of beneficial features in the feature information stream is increased by suppressing the noisy ones. The GFIC uses pooling to compress deep abstract features and improve the model’s robustness to feature displacement and rotation. Because pooling loses detailed feature information, dilated convolution is introduced for feature complementation. Dilated convolution enlarges the receptive field of the model while leaving the spatial resolution unchanged, improving its ability to recognize long-range dependencies and establish spatial location relationships between features. The proposed detector also improves the detection of objects at different scales within the same image using a dual multi-scale feature fusion strategy. Finally, classification and regression tasks are spatially decoupled using a decoupled head. Experiments on the DIOR and NWPU VHR-10 datasets demonstrate that MFICDet achieves competitive performance compared with current state-of-the-art detectors.
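The receptive-field benefit of dilated convolution mentioned above is easy to verify numerically. The sketch below uses the standard formulas (effective kernel size k + (k-1)(d-1), and the usual receptive-field recurrence over stacked layers); it is a generic illustration, not MFICDet's actual layer configuration.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of (kernel, dilation, stride) conv layers."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf

# Three stride-1 3x3 layers: plain vs. dilations 1, 2, 4.
plain = receptive_field([(3, 1, 1)] * 3)                    # 7
dilated = receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)])  # 15
```

With stride 1 throughout, the spatial resolution is unchanged in both stacks, yet the dilated stack more than doubles the receptive field at no extra parameter cost.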
Object detection is widely used in remote sensing image interpretation. Although most object detection models achieve high detection accuracy, their computational complexity and low detection speed limit their application in real-time detection tasks. This study develops an adaptive feature-aware object detection method for remote sensing images based on the single-shot detector architecture, called the adaptive feature-aware detector (AFADet). Self-attention is used to extract high-level semantic information from deep feature maps, improving the model's spatial localization of objects. The adaptive feature-aware module performs adaptive cross-scale depth fusion of feature maps at different scales to improve the learning ability of the model and reduce the influence of complex backgrounds in remote sensing images. The focal loss is used during training to address the imbalance between positive and negative samples, reduce the dominance of easily classified samples in the loss value, and enhance the stability of model training. Experiments are conducted on three object detection datasets, and the results are compared with those of classical and recent object detection algorithms. The mean average precision (mAP) values are 66.12%, 95.54%, and 86.44% on the three datasets, suggesting that AFADet can detect objects in remote sensing images in real time with high accuracy and can effectively balance detection accuracy and speed.
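The focal loss used for training can be written compactly. The sketch below is the standard binary focal-loss formulation; AFADet's exact hyperparameters are not given in the abstract, so the default `gamma` and `alpha` values here are illustrative.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction: down-weights well-classified
    examples by (1 - pt)^gamma so that hard examples dominate training.
    p is the predicted probability of the positive class, y is in {0, 1}."""
    pt = p if y == 1 else 1 - p           # probability of the true class
    a = alpha if y == 1 else 1 - alpha    # class-balancing weight
    return -a * (1 - pt) ** gamma * math.log(pt)

# An easy positive (p = 0.9) contributes far less loss than a hard one (p = 0.1).
easy, hard = focal_loss(0.9, 1), focal_loss(0.1, 1)
```

With `gamma = 0` the modulating factor disappears and the loss reduces to alpha-weighted cross-entropy, which is how the down-weighting effect is usually explained.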
Scene classification is an important and challenging task for understanding remote sensing images. Convolutional neural networks have been widely applied to remote sensing scene classification in recent years, boosting classification accuracy. However, with improvements in resolution, the categories of remote sensing images have become ever more fine-grained. High intraclass diversity and interclass similarity are the main characteristics that differentiate remote sensing scene classification from natural image classification. To extract discriminative representations from images, we propose an end-to-end feature fusion method that aggregates features from dual paths (AFDP). First, lightweight convolutional neural networks with fewer parameters and calculations are used to construct a feature extractor with dual branches. Then, in the feature fusion stage, a novel fusion method that integrates the concepts of bilinear pooling and feature concatenation is adopted to learn discriminative features from images. The AFDP method was evaluated on three public remote sensing image benchmarks. The experimental results indicate that AFDP outperforms current state-of-the-art methods, with the advantages of simple form, strong versatility, fewer parameters, and lower computational cost.
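A minimal sketch of combining bilinear pooling with feature concatenation, under the simplest reading of the fusion step described above: the outer product of the two path features captures pairwise interactions, the raw features are concatenated alongside it, and the combined descriptor is L2-normalized. The real AFDP fusion may differ in detail.

```python
import math

def bilinear_pool(a, b):
    """Outer product of two path features, flattened to a vector."""
    return [ai * bj for ai in a for bj in b]

def fuse(a, b):
    """Fuse dual-path features: bilinear interaction terms plus plain
    concatenation of the originals, then L2-normalize the result."""
    v = bilinear_pool(a, b) + a + b
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

# Two toy 2-dim branch features -> a fused 2*2 + 2 + 2 = 8-dim descriptor.
fused = fuse([1.0, 2.0], [3.0, 4.0])
```

The bilinear terms model second-order interactions between the branches, while the concatenated originals preserve first-order information that a pure outer product would lose.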
Deep learning has achieved great success in remote sensing image change detection (CD). However, most methods focus only on the changed regions of images and cannot accurately identify their detailed semantic categories. In addition, most CD methods based on convolutional neural networks (CNNs) have difficulty capturing sufficient global information from images. To address these issues, we propose a novel symmetric multi-task network (SMNet) that integrates global and local information for semantic change detection (SCD). Specifically, we employ a hybrid unit consisting of pre-activated residual blocks (PR) and transformation blocks (TB) to construct the PRTB backbone, which obtains richer semantic features with local and global information from bi-temporal images. To accurately capture fine-grained changes, a multi-content fusion module (MCFM) is introduced, which effectively enhances change features by distinguishing foreground from background information in complex scenes. Meanwhile, multi-task prediction branches are adopted, and a multi-task loss function jointly supervises model training to improve the performance of the network. Extensive experimental results on the challenging SECOND dataset show that SMNet achieves 71.95% mean Intersection over Union (mIoU) and 20.29% Separated Kappa coefficient (Sek), demonstrating the effectiveness and superiority of the proposed method.
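The mIoU metric reported above can be computed from a per-class confusion matrix. The sketch below uses the standard definition (the mean over classes of TP / (TP + FP + FN)), shown on a toy two-class matrix; it is a generic metric illustration, not SMNet's evaluation code.

```python
def miou(confusion):
    """Mean Intersection over Union from a square confusion matrix,
    where confusion[i][j] counts pixels of class i predicted as class j."""
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column minus diagonal
        fn = sum(confusion[c]) - tp                       # row minus diagonal
        denom = tp + fp + fn
        if denom:  # skip classes absent from both prediction and ground truth
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Toy 2-class example: IoUs are 3/5 and 5/7, so mIoU is their mean.
score = miou([[3, 1], [1, 5]])
```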