A deep neural network is well suited to pixel-wise classification of remote sensing images because it effectively extracts features from raw data. However, remote sensing images with higher spatial resolution exhibit smaller inter-class differences and greater intra-class differences, which makes feature extraction more difficult. The attention mechanism, which simulates the manner in which humans comprehend and perceive images, is useful for the quick and accurate acquisition of key features. In this study, we propose a novel neural network that incorporates two kinds of attention mechanisms, chosen to match the primary roles of its mask and trunk branches: a control gate (soft) attention mechanism and a feedback attention mechanism, respectively. A deep neural network equipped with these attention mechanisms can thus perform pixel-wise classification of very high-resolution remote sensing (VHRRS) images. The control gate attention mechanism in the mask branch builds pixel-wise masks for the feature maps, assigning different priorities to different locations on different channels; this recalibrates feature extraction, emphasizes effective features, and weakens the influence of unprofitable ones. The feedback attention mechanism in the trunk branch retrieves high-level semantic features, providing additional guidance that helps lower layers re-weight their focus and update higher-level feature extraction in a target-oriented manner. The two attention mechanisms are fused to form a neural network module. By stacking modules with mask branches of different scales, the network exploits attention-aware features under different local spatial structures. The proposed method is tested on VHRRS images from the BJ-02, GF-02, GeoEye, and QuickBird satellites, and the influence of the network structure and the rationality of the network design are discussed. Compared with other state-of-the-art methods, the proposed method achieves competitive accuracy, demonstrating its effectiveness.
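As an illustration of how a control-gate (soft) attention mask could re-weight trunk features, here is a minimal numpy sketch. The function name and the residual-style (1 + mask) re-weighting are assumptions for illustration only, not the paper's actual implementation.

```python
import numpy as np

def soft_attention(trunk, mask_logits):
    """Apply a control-gate (soft) attention mask to trunk features.

    trunk: feature maps of shape (channels, height, width).
    mask_logits: raw mask scores of the same shape, one score per
    location per channel, as produced by a mask branch.
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid gate in (0, 1)
    # Residual-style re-weighting: effective features are emphasized,
    # unprofitable ones are attenuated, and the identity path is kept.
    return (1.0 + mask) * trunk

rng = np.random.default_rng(0)
trunk = rng.standard_normal((4, 8, 8))
mask_logits = rng.standard_normal((4, 8, 8))
out = soft_attention(trunk, mask_logits)
print(out.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), each feature is scaled by a factor between 1 and 2, so the identity signal is never suppressed entirely.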
Abstract: Deep neural networks (DNNs) face several problems in very high resolution remote sensing (VHRRS) per-pixel classification. As network depth increases, vanishing gradients degrade classification accuracy, and the growing number of parameters to be learned raises the risk of overfitting, especially when only a small number of labeled VHRRS samples is available for training. Furthermore, the hidden layers of DNNs are not sufficiently transparent, so the extracted features are not discriminative enough and contain significant redundancy. This paper proposes a novel depth-width-reinforced DNN that addresses these problems to produce better per-pixel classification results on VHRRS images. In the proposed method, densely connected neural networks and internal classifiers are combined to build a deeper network while balancing depth against performance. This strengthens the gradients, reduces the negative effects of gradient vanishing as depth increases, and enhances the transparency of the hidden layers, making the extracted features more discriminative and lowering the risk of overfitting. In addition, the proposed method uses multi-scale filters to create a wider network. The depth of the filters at each scale is controlled to reduce redundancy, and the multi-scale filters allow joint spatio-spectral information and diverse local spatial structures to be exploited simultaneously. Furthermore, the network-in-network concept is applied to better fuse the deeper and wider designs, making the network operate more smoothly. Experiments on BJ02, GF02, GeoEye, and QuickBird satellite images verify the efficacy of the proposed method. It not only achieves competitive classification results but also remains robust as the amount of labeled training data decreases, which fits the small-sample situation faced by VHRRS per-pixel classification.
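The widening idea can be sketched in a few lines of numpy, under clearly hypothetical assumptions: filters of several receptive-field sizes run over the same input, with the number of maps per scale capped to limit redundancy. Learned convolutions are replaced here by simple average-pooling stand-ins, so this is a toy illustration, not the paper's architecture.

```python
import numpy as np

def multi_scale_features(image, scales=(3, 5, 7), depth_per_scale=2):
    """Toy sketch of a widened layer: 'filters' of several kernel sizes
    are applied to the same input, and the number of maps per scale is
    capped (depth_per_scale) to limit redundancy."""
    h, w = image.shape
    maps = []
    for k in scales:
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")
        # One smoothed map per scale, repeated depth_per_scale times
        # as a stand-in for learned filters of that receptive field.
        smoothed = np.zeros_like(image, dtype=float)
        for i in range(h):
            for j in range(w):
                smoothed[i, j] = padded[i:i + k, j:j + k].mean()
        maps.extend([smoothed] * depth_per_scale)
    return np.stack(maps)  # shape: (len(scales) * depth_per_scale, h, w)
```

The stacked output mimics the concatenated multi-scale feature maps that a wider layer would pass on to the next stage.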
Abstract: Using deep learning to improve the classification of high-resolution satellite images has recently emerged as an important topic in automatic classification. Deep networks track hierarchical high-level features to identify objects; however, the contribution of low-level features to classification accuracy is often disregarded. We therefore propose a two-stream deep-learning neural network strategy in which the main stream uses fine-spatial-resolution panchromatic images to retain low-level information under a supervised residual network structure. An auxiliary line employs an unsupervised net to extract high-level abstract and discriminative features from multispectral images, supplementing the spectral information in the main stream. Several types of features extracted from the network are selected and joined in the novel net, as the combined high- and low-level features provide a superior solution for image classification. In traditional convolutional neural networks, increased depth may not perceptibly improve performance; we therefore introduce a residual neural network to develop the expressive ability of the deeper net, increasing the role of depth in feature extraction. To enhance feature robustness, we propose a novel consolidation step in feature extraction: an adversarial net improves the feature extraction capability and helps mine inherent, discriminative features from the data with increased efficacy. Tests on satellite images show the high overall accuracy of the novel net and verify that net depth and the number of convolution kernels affect classification capability. Comparative tests confirm the structural rationality of the two-stream design.
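The joining of the two streams' features can be pictured as a simple concatenation followed by normalization. The function name and the L2-normalization step are illustrative assumptions, not the paper's exact fusion rule.

```python
import numpy as np

def fuse_streams(pan_features, ms_features):
    """Concatenate low-level features from the panchromatic main stream
    with high-level features from the multispectral auxiliary stream,
    then L2-normalize the joint vector before classification."""
    joint = np.concatenate([pan_features, ms_features])
    norm = np.linalg.norm(joint)
    return joint / norm if norm > 0 else joint

joint = fuse_streams(np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0]))
print(joint.shape)  # (5,)
```

Normalizing the joint vector keeps the two streams on a comparable scale before the classifier sees them.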
Monitoring the cardiopulmonary signal of animals is a challenge for veterinarians when contact with a conscious animal is inconvenient, difficult, damaging, distressing, or dangerous to personnel or to the animal itself. In this pilot study, we demonstrate a computer-vision-based system for extracting the cardiopulmonary signal, using examples of exotic, untamed species. Subject animals included the giant panda (Ailuropoda melanoleuca), African lion (Panthera leo), Sumatran tiger (Panthera tigris sumatrae), koala (Phascolarctos cinereus), red kangaroo (Macropus rufus), alpaca (Vicugna pacos), little blue penguin (Eudyptula minor), Sumatran orangutan (Pongo abelii), and Hamadryas baboon (Papio hamadryas). The study was conducted without restraint, fixation, contact, or disruption of the subjects' daily routine. The pilot system extracts the signal from image sequences of the abdominal-thoracic region, where cardiopulmonary activity is most likely to be visible, captured by a digital camera. The results show motion on the body surface of the subjects that is characteristic of cardiopulmonary activity and is likely to be useful for estimating physiological parameters (pulse rate and breathing rate) without any physical contact. The results suggest that a fully controlled study against conventional physiological monitoring equipment is ethically warranted and may lead to a novel approach to non-contact physiological monitoring and remotely sensed health assessment of animals. The method shows promise for applications in veterinary practice, conservation and game management, animal welfare, and zoological and behavioral studies.
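The core signal-extraction step behind such systems can be illustrated with a short numpy sketch: average the pixel intensities of the region of interest in each frame, then find the dominant oscillation frequency. All names are hypothetical and the actual pilot system may differ.

```python
import numpy as np

def dominant_rate(frames, fps):
    """Estimate a cardiopulmonary rate (cycles per minute) from the mean
    intensity of a region of interest across video frames.

    frames: array of shape (n_frames, h, w), cropped to the
    abdominal-thoracic region; fps: camera frame rate."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Ignore the zero-frequency bin; pick the strongest oscillation.
    peak = freqs[1:][np.argmax(spectrum[1:])]
    return peak * 60.0                       # cycles per minute

# Synthetic check: a 1.2 Hz (72 bpm) brightness oscillation.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = 100 + 5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 4, 4))
print(round(dominant_rate(frames, fps)))  # 72
```

In practice, the breathing and pulse components would occupy different frequency bands and could be separated by band-pass filtering before the peak search.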
The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and distill the clinical literature, as of early May 2020, on the diagnosis of COVID-19 through symptoms that might be detected remotely. Vital signs associated with respiratory distress and fever, coughing, and visible infections have been reported. Fever screening by temperature monitoring is currently popular, but improved non-contact detection is sought. Vital signs, including heart rate and respiratory rate, are affected by the condition; cough, fatigue, and visible infections are also reported as common symptoms. Non-contact methods for measuring vital signs remotely have been shown to have acceptable accuracy, reliability, and practicality in some settings; each has its pros and cons and may perform well against some challenges but be inadequate against others. Our review shows that, of the modalities studied to date, visible-spectrum and thermal cameras offer the best options for truly non-contact sensing: thermal cameras because they can potentially measure all likely symptoms, especially temperature, with a single camera, and video cameras because of their availability, cost, adaptability, and compatibility. Substantial supply-chain disruptions during the pandemic and the widespread nature of the problem mean that cost-effectiveness and availability are important considerations.
Fast edge detection can be useful for many real-world applications. Edge detection is rarely an end application in itself; it is often the first step of a computer vision pipeline, so fast and simple edge detection techniques are important for efficient image processing. In this work, we propose a new edge detection algorithm that combines the wavelet transform, Shannon entropy, and thresholding. The algorithm is based on the idea that each wavelet decomposition level carries an assumed level of structure, which allows Shannon entropy to be used as a measure of global image structure. The proposed algorithm is developed mathematically and compared with five popular edge detection algorithms. The results show that our solution produces low-redundancy output, is resilient to noise, and is well suited to real-time image processing applications.
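A minimal numpy sketch of the combination described: a one-level Haar transform, Shannon entropy of the detail coefficients, and thresholding. The specific entropy-scaled threshold rule below is a hypothetical stand-in, not the paper's derivation.

```python
import numpy as np

def haar_details(img):
    """One level of a 2-D Haar transform: return horizontal, vertical,
    and diagonal detail coefficients (image size must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal details
    hl = (a + b - c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    return lh, hl, hh

def shannon_entropy(x, bins=32):
    """Shannon entropy of a coefficient histogram, used here as a global
    measure of structure at the decomposition level."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def edge_map(img):
    """Mark edges where detail energy exceeds an entropy-scaled threshold."""
    lh, hl, hh = haar_details(img.astype(float))
    energy = np.sqrt(lh**2 + hl**2 + hh**2)
    h = shannon_entropy(energy)
    # Hypothetical rule: scale the mean detail energy by the entropy.
    return energy > energy.mean() * (1.0 + 1.0 / max(h, 1e-9))

# A vertical step edge should be detected along its downsampled column.
img = np.zeros((8, 8)); img[:, 3:] = 1.0
print(edge_map(img).any())  # True
```

Note that the output is at half the input resolution, since the detail coefficients come from a downsampled decomposition level; a production implementation would use a wavelet library and upsample the map back to the input grid.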