Recent advances in deep learning have shown extraordinary performance in road extraction from high-resolution satellite imagery. However, most existing deep-learning models yield discontinuous and incomplete results because of shadows and occlusions. To address this problem, a Dual-Attention Road extraction Network (DA-RoadNet) with a certain semantic reasoning ability is proposed. First, DA-RoadNet is built on a shallow encoder-decoder network with densely connected blocks, which minimizes the loss of road structure information caused by repeated downsampling operations. Moreover, a novel attention mechanism module allows the network to explore and integrate the hidden correlations among road features through their global dependencies in the spatial and channel dimensions, respectively. Finally, because road pixels account for only a small proportion of satellite imagery, a hybrid loss function is introduced to handle class imbalance, enabling the model to train stably and avoid local optima. Validation experiments on two open road datasets demonstrate that DA-RoadNet effectively alleviates discontinuity and preserves the integrity of the extracted roads, yielding higher road-extraction accuracy than other state-of-the-art methods. Its considerable performance on the two challenging benchmarks also demonstrates the strong generalization ability of the method.
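The channel-dimension branch of a dual-attention mechanism can be illustrated with a minimal sketch. Note this is an assumption-laden illustration of generic channel attention, not DA-RoadNet's exact module: channels are reweighted by their pairwise affinities so that globally correlated road responses reinforce one another.

```python
import numpy as np

def channel_attention(feat):
    """Minimal channel-attention sketch (illustrative, not the paper's
    exact DA-RoadNet module): mix feature channels according to their
    pairwise similarity, then add a residual connection.

    feat: array of shape (C, H, W) -- one feature map per channel.
    Returns an array of the same shape.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)                 # (C, N) flattened maps
    energy = flat @ flat.T                        # (C, C) channel affinities
    # Row-wise softmax turns affinities into attention weights.
    energy = energy - energy.max(axis=1, keepdims=True)
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)
    out = attn @ flat                             # mix channels by attention
    return out.reshape(C, H, W) + feat            # residual connection
```

A spatial-attention branch would apply the same softmax-weighted mixing over pixel positions instead of channels; the two branch outputs are then fused.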
Feature extraction on point clouds is an essential task in analyzing and processing point clouds of 3D scenes. However, adequately exploiting local fine-grained features remains challenging because point cloud data are irregular and unordered in 3D space. To alleviate this problem, a Dilated Graph Attention-based Network (DGANet) with a certain feature-learning ability is proposed. Specifically, we first build a local dilated graph-like region for each input point to establish long-range spatial correlations with its neighbors, giving the network access to a wider range of geometric information of local points along with their long-range dependencies. Moreover, the dilated graph attention module (DGAM), implemented with a novel offset-attention mechanism, enables the network to weight each edge of the constructed local graph differently, learning the discrepancies in geometric attributes between connected point pairs. Finally, all the learned edge attention features are aggregated by graph-attention pooling into the most significant geometric feature representation of each local region, fully extracting local detailed features for each point. Validation experiments on two challenging benchmark datasets demonstrate the effectiveness and strong generalization ability of the proposed DGANet in both 3D object classification and segmentation tasks.
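The idea of a dilated local graph can be sketched as a dilated k-nearest-neighbor selection. The function name and exact selection rule below are assumptions for illustration, not DGANet's implementation: among the nearest k × d candidates of each point, keeping every d-th one widens the receptive field without increasing the neighbor count.

```python
import numpy as np

def dilated_knn(points, k=4, dilation=2):
    """Illustrative dilated neighbor selection (an assumption, not
    DGANet's exact rule): rank all points by distance and keep every
    `dilation`-th of the nearest k * dilation candidates.

    points: (N, 3) array of 3D coordinates.
    Returns an (N, k) array of neighbor indices per point.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # (N, N) pairwise distances
    order = np.argsort(dist, axis=1)              # nearest first; self at 0
    candidates = order[:, 1:1 + k * dilation]     # skip the point itself
    return candidates[:, ::dilation]              # every dilation-th neighbor
```

Edges of the local graph would then connect each point to its dilated neighbors, and the attention module scores each edge from the geometric offset between the connected pair.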
A major limitation of remote-sensing images is degradation by bad weather conditions such as haze, which significantly reduces the accuracy of satellite image interpretation. To solve this problem, this paper proposes a novel unsupervised method for removing haze from high-resolution optical remote-sensing images. The proposed method, based on cycle generative adversarial networks, is called the edge-sharpening cycle-consistent adversarial network (ES-CCGAN). Most importantly, unlike existing methods, this approach requires no prior information; training is unsupervised, which eases the burden of preparing the training data set. To enhance the extraction of ground-object information, the generative network replaces the residual neural network (ResNet) with a dense convolutional network (DenseNet). An edge-sharpening loss function is designed to recover clear ground-object edges and obtain more detailed information from hazy images. For the high-frequency information extraction model, this study re-trained the Visual Geometry Group (VGG) network on remote-sensing images. Experimental results show that the proposed method successfully recovers different kinds of scenes from hazy images with excellent color consistency. Moreover, its ability to recover clear edges and rich texture information makes it superior to existing methods.
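The intent of an edge-sharpening loss can be shown with a minimal sketch. This is a plain finite-difference variant written for illustration; the paper's exact formulation (e.g., which gradient operator and feature space it uses) may differ: the loss penalizes mismatch between the image gradients of the dehazed output and those of a clear reference, so ground-object edges stay crisp.

```python
import numpy as np

def edge_sharpening_loss(dehazed, reference):
    """Illustrative edge-sharpening loss (a simple finite-difference
    variant, not necessarily ES-CCGAN's exact formulation): L1 distance
    between the horizontal/vertical gradients of the two images.

    dehazed, reference: (H, W) grayscale arrays in [0, 1].
    Returns a scalar loss (0 when edge structure matches exactly).
    """
    def grads(img):
        gx = np.diff(img, axis=1)   # horizontal edge response
        gy = np.diff(img, axis=0)   # vertical edge response
        return gx, gy

    dx, dy = grads(dehazed)
    rx, ry = grads(reference)
    return float(np.mean(np.abs(dx - rx)) + np.mean(np.abs(dy - ry)))
```

In a CycleGAN-style setup, a term like this would be added to the adversarial and cycle-consistency losses to discourage the generator from blurring edges while removing haze.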