Recent developments in Convolutional Neural Networks (CNNs) have enabled solid advances in the semantic segmentation of high-resolution remote sensing (HRRS) images. Nevertheless, previous works have not fully addressed the poor classification of small objects and the unclear boundaries caused by the characteristics of HRRS image data. To tackle these challenging problems, we propose an improved semantic segmentation neural network that adopts dilated convolution, a fully connected (FC) fusion path, and a pre-trained encoder for the semantic segmentation of HRRS imagery. The network is built on the computationally efficient DeepLabv3 architecture, with added Augmented Atrous Spatial Pyramid Pooling and FC Fusion Path layers. Dilated convolution enlarges the receptive field of feature points without decreasing the feature-map resolution. The improved architecture enhances HRRS image segmentation, reaching a classification accuracy of 91% and improving the precision of small-object recognition. The applicability of the improved model to the remote sensing image segmentation task is verified.
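The receptive-field enlargement that dilated convolution provides can be sketched with the standard receptive-field recurrence. The layer stack below is a hypothetical example for illustration, not the paper's exact network:

```python
# Receptive-field growth of stacked dilated convolutions -- a minimal sketch.
# For stride-1 layers, rf_out = rf_in + (kernel - 1) * dilation; dilated
# convolution widens the field without downsampling the feature map.

def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs for stride-1 convolutions."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d  # each layer widens the field, resolution unchanged
    return rf

# Three 3x3 convs with dilation 1 see a 7x7 window; with dilations 1, 2, 4
# the same three layers cover a 15x15 window at the same cost.
standard = receptive_field([(3, 1), (3, 1), (3, 1)])
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])
print(standard, dilated)
```

This is why atrous spatial pyramid pooling can aggregate context at several scales without extra downsampling stages.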
Building footprint extraction from high-resolution aerial images has always been an essential part of urban dynamic monitoring, planning, and management, and it remains a challenging task in remote sensing research. In recent years, deep neural networks have made great progress in improving the accuracy of building extraction from remote sensing imagery. However, most existing approaches require a large number of parameters and floating-point operations to reach high accuracy, which leads to high memory consumption and low inference speed. In this paper, we propose a novel efficient network named ESFNet, which employs separable factorized residual blocks and dilated convolutions, aiming for low computational cost and memory consumption with only a slight loss of accuracy. ESFNet obtains a better trade-off between accuracy and efficiency: it runs at over 100 FPS on a single Tesla V100, requires 6x fewer FLOPs, and has 18x fewer parameters than the state-of-the-art real-time architecture ERFNet, while preserving similar accuracy without any additional context module, post-processing, or pre-training scheme. We evaluated our networks on the WHU building dataset and compared them with other state-of-the-art architectures. The results and comprehensive analysis show that our networks benefit efficient remote sensing research, and the idea can be extended to other areas.
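The parameter savings of separable, spatially factorized convolutions can be counted directly. This is a sketch of the general counting argument only; ESFNet's exact block layout may differ:

```python
# Parameter counts: standard k x k convolution versus a depthwise-separable,
# spatially factorized one (k x 1 and 1 x k depthwise filters plus a 1 x 1
# pointwise conv). Illustrates why such blocks are cheap, not ESFNet itself.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out  # one k x k filter per (input, output) channel pair

def separable_factorized_params(k, c_in, c_out):
    depthwise = (k + k) * c_in  # factorized k x 1 and 1 x k, one per channel
    pointwise = c_in * c_out    # 1 x 1 conv mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 64, 64)
sep = separable_factorized_params(3, 64, 64)
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction at 64 channels
```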
Accurate and reliable forestry data can be obtained through continuous monitoring of forests with advanced technologies, which provides a major opportunity for the development of smart forestry. However, as the precision and acquisition speed of the data improve, traditional data analysis and storage technologies cannot meet the performance requirements of current applications. Forestry big data, the application of big data technology to forestry data processing, offers a new solution to the difficulties encountered in the course of forestry development. In this paper, we summarize recent research on big data in smart forestry. First, we review the emergence and development of forestry big data and briefly summarize the opportunities that big data technology brings to forestry. One of the most important tasks of forestry big data is to organize the massive data reasonably and effectively and to compute on it quickly. We therefore propose a five-layer architecture model for forestry big data and summarize related work on data storage, querying, analysis, and application. Finally, we analyze the challenges of forestry big data and discuss future development trends from three aspects.
Aspect-level sentiment analysis is a fine-grained sentiment analysis task designed to identify the sentiment polarity of a specific target in a sentence. However, this task has rarely been applied to drug reviews. Some models for this task ignore the impact of target semantics, and others do not perform well because the datasets are relatively small. Therefore, we propose a Pretraining and Multi-task learning model based on Double BiGRU (PM-DBiGRU). In PM-DBiGRU, we first use pretrained weights learned from a short text-level drug review sentiment classification task to initialize the related weights of our model. Two BiGRU networks then generate bidirectional semantic representations of the target and the drug review, and an attention mechanism obtains a target-specific representation for the aspect-level drug review. Multi-task learning is further utilized to transfer helpful domain knowledge from the short text-level drug review corpus. We also propose SentiDrugs, a dataset for aspect-level drug review sentiment classification in which each review may contain one or more targets. Experimental results on SentiDrugs demonstrate that our approach improves the performance of aspect-level drug review sentiment classification compared with other state-of-the-art architectures.

INDEX TERMS Aspect-level, drug reviews, double BiGRU, pretraining, multi-task learning.
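The target-specific attention step can be sketched as a softmax-weighted sum of the review's hidden states, scored against the target representation. The dot-product scoring function, vector sizes, and toy numbers below are assumptions for illustration; PM-DBiGRU's exact formulation may differ:

```python
# Target-aware attention over BiGRU hidden states -- a minimal sketch.
import math

def attend(hidden_states, target):
    """Weight each hidden state by its similarity to the target representation."""
    scores = [sum(h_i * t_i for h_i, t_i in zip(h, target)) for h in hidden_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    # Weighted sum gives the target-specific review representation.
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]

states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # toy per-token BiGRU outputs
rep = attend(states, target=[1.0, 0.0])        # target aligned with first token
print(rep)  # representation leans toward the tokens similar to the target
```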
High-resolution remote sensing images are rich in texture information, while pixel-level change detection methods mainly analyze the spectral information of the image, which has certain limitations. In this paper, a high-resolution remote sensing image change detection method combining the pixel and object levels is proposed, to address the salt-and-pepper noise and false detections of pixel-level methods and the cumbersome image segmentation process required by object-level methods. We integrate the multi-dimensional features of high-resolution remote sensing images and use a random forest classifier to obtain pixel-level change detection results. Then, we use an improved U-Net to semantically segment the post-phase remote sensing image and obtain the image object segmentation result. Finally, the pixel-level change detection results and the image object segmentation result are fused to obtain the changed and unchanged areas of the image. The experimental results demonstrate that the algorithm achieves higher accuracy and detection precision.
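One simple way to fuse a pixel-level change map with object segments is a per-object majority vote. The vote rule below is an assumption for illustration, not necessarily the paper's fusion strategy:

```python
# Fusing a pixel-level change map with object segments by per-object voting --
# a minimal sketch; the >50% threshold is a hypothetical choice.

def fuse(change_map, segments):
    """change_map: 2D 0/1 grid; segments: 2D grid of object ids.
    An object is marked changed if most of its pixels are flagged as changed."""
    votes = {}  # object id -> (changed pixels, total pixels)
    for row_c, row_s in zip(change_map, segments):
        for c, s in zip(row_c, row_s):
            changed, total = votes.get(s, (0, 0))
            votes[s] = (changed + c, total + 1)
    changed_objects = {s for s, (c, n) in votes.items() if c / n > 0.5}
    return [[1 if s in changed_objects else 0 for s in row] for row in segments]

change = [[1, 1, 0], [1, 0, 0]]  # toy pixel-level detections
objs = [[0, 0, 1], [0, 1, 1]]    # two segmented objects
print(fuse(change, objs))        # object 0 is changed, object 1 is not
```

Voting per object suppresses isolated salt-and-pepper pixels, since a few stray detections inside a large unchanged object cannot flip its label.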
In the era of edge computing, real-time data preprocessing on the edge node has the potential to improve computational efficiency and data accuracy. However, a significant challenge is private data disclosure, particularly in the case of location-based services. To address this challenge, in this paper we leverage differential privacy and propose a privacy-aware framework for mobile edge computing, called MEPA, which protects location privacy by treating the edge node as an anonymous central server. The proposed framework can provide computing services without deploying special infrastructure. Specifically, to cope with the constrained computing resources of edge nodes, we propose the Quadtree Differential Privacy algorithm based on Hilbert curve division (QTDP-H) for two-dimensional spatial data query transmission. First, a noise quadtree is established and the privacy budget is divided according to the tree level. Then, the constructed quadtree is represented in quaternary form, so that a partition based on the Hilbert curve can be established and the two-dimensional data in the area can be converted into one dimension, which greatly improves retrieval efficiency. The effectiveness of the proposed algorithm in terms of time complexity and retrieval accuracy has been verified by extensive experimental results. Compared with traditional (D, ) − LP methods, the average runtime is reduced by 15%-20%, and the average relative error is reduced by 20%.

KEYWORDS differential privacy, Hilbert curve, location-based service, mobile edge computing, privacy aware, quadtree

INTRODUCTION
With the development of the Internet of Things (IoT) and cloud computing, the amount of data on the edge network is growing rapidly. Therefore, it is more efficient to process the data at the edge of the network. However, the development of network bandwidth is slower compared with the powerful computing ability of cloud services.
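The 2D-to-1D conversion that underlies the Hilbert-curve partition can be sketched with the classic iterative index mapping. This shows only the locality-preserving mapping itself; QTDP-H's integration with the quadtree (quaternary codes, privacy-budget split) is not reproduced here:

```python
# Mapping 2D grid cells to 1D Hilbert-curve indices -- the classic iterative
# algorithm. Nearby cells in 2D tend to get nearby 1D indices, which is what
# makes one-dimensional range retrieval efficient.

def xy2d(n, x, y):
    """Hilbert index of cell (x, y) on an n x n grid (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate/flip the quadrant so the sub-curve lines up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order of the four cells of a 2x2 grid along the curve.
print([xy2d(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])
```

The mapping is a bijection on the grid, and consecutive indices always correspond to 2D neighbors, so a contiguous 1D range covers a compact 2D region.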
The amount of data is growing rapidly, and the time consumed by data transmission has become the main bottleneck for cloud computing applications. In the cloud computing model, devices at the edge often act only as consumers; however, people also generate data from the devices they use.1 This shift from data consumers to data consumers/producers requires more functionality on the edge node. At the edge of the network, however, user privacy and data security are among the most important requirements. If IoT devices are deployed in the home, private information can be obtained from user data; for example, reading electric and water meter data can reveal whether anyone is in the house. By obtaining location data, one can infer someone's home address, lifestyle, social relationships, and more. The disclosure of such personal information to attackers therefore poses a serious threat to user privacy. The development of edge computing will enable a variety of intelligent applications that were impractical in the past due to network ...