“…After obtaining the aerial photogrammetry data, further processing was conducted. A project was created in DP‐smart software, and the pre‐processed aerial images, POS (position and orientation system) results, image control results and other data were imported for aerial triangulation (Jiang et al., 2018). The acquisition process was as follows.…”
Unmanned aerial vehicles (UAVs) capture oblique point clouds of outdoor scenes that contain considerable building information. Building features extracted from images are affected by viewpoint, illumination, occlusion, noise and imaging conditions, which makes them difficult to extract reliably. Ground elevation changes provide a powerful aid for extraction, and point cloud data precisely reflect this information; oblique photogrammetry point clouds therefore have significant research value. Traditional building extraction methods filter and sort the raw data to separate buildings, which causes the point clouds to lose spatial information and reduces extraction accuracy. We therefore develop a deep-learning-based intelligent building extraction method that incorporates an attention mechanism module into the Sampling and PointNet operations within the set abstraction layer of the PointNet++ network. To assess the efficacy of our approach, we train on and extract buildings from a dataset built from UAV oblique point clouds of five regions in the city of Bengbu, China. The method achieves 95.7% intersection over union, 96.5% accuracy, 96.5% precision, 98.7% recall and a 97.8% F1 score. With the addition of the attention mechanism, the overall training accuracy of the model improves by about 3%. The method shows potential for advancing the accuracy and efficiency of digital urbanization construction projects.
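The abstract describes inserting an attention module into the set abstraction layer of PointNet++ without giving its form. A minimal NumPy sketch of one common variant, attention-weighted pooling over a local point group, is shown below; the scoring vector `w` is a hypothetical learned parameter and the function is an illustrative assumption, not the paper's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, w):
    """Attention-weighted pooling over one local point group.

    features: (n_points, n_channels) local features from the grouping step.
    w: (n_channels,) scoring vector (stand-in for learned attention weights).
    Returns one (n_channels,) aggregated feature: instead of plain
    max-pooling, each point contributes in proportion to its attention score.
    """
    scores = features @ w      # one scalar score per point
    alpha = softmax(scores)    # attention weights, sum to 1
    return alpha @ features    # weighted sum over the group
```

In a full set abstraction layer this pooling would replace (or complement) the max-pooling applied after the per-point PointNet MLP, letting informative points dominate the aggregated feature.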
“…especially with the proposal and development of 'Smart Cities', urban structures have been expanding both horizontally and vertically [1]–[3]. Building height is a crucial metric of vertical development [4], enabling various urban development applications, such as urban development assessment [5], [6], digital city construction [7], [8], and 3D urban form analysis [9]–[11]. Furthermore, the development of remote sensing technology provides a more convenient means of obtaining surface information, and accurately deriving building height from remote sensing images is therefore a crucial component of remote sensing research.…”
Extracting building heights from single-view remote sensing images greatly enhances the application of remote sensing data. While methods for extracting building height from single-view shadow images have been widely studied, it remains a challenging task. The main reasons are as follows: (1) The traditional method for extracting shadow information exhibits low accuracy. (2) The use of only shadow information to extract building height results in limited application scenarios. To solve the above problems, this paper introduces building side and shadow information to complement each other, and proposes a building height extraction method from high-resolution single-view remote sensing images using shadow and side information. Firstly, we propose the RMU-Net method, which utilizes multi-scale features for the extraction of shadow and side information. This method aims to address issues related to pixel detail loss and imprecise edge segmentation, which result from significant scale differences within segmentation targets. Additionally, we employ the area threshold method to optimize the segmentation results, specifically to tackle small stray patches and holes, enhancing the overall integrity and accuracy of shadow and side information extraction. Secondly, we propose a method for building height extraction that integrates shadow and side information based on an enhanced proportional coefficient model. The accuracy of measuring building side and shadow lengths is improved by incorporating the fishing net method, informed by our analysis of the geometric relationships among buildings. Finally, we establish a dataset containing building shadow and side information from remote sensing images, and select multiple areas for experimental analysis. The results demonstrate a shadow extraction accuracy of 91.03% and a side extraction accuracy of 90.29%. 
Additionally, the mean absolute error (MAE) of building height extraction is 1.22, and the root mean square error (RMSE) is 1.21. Furthermore, the proposed method's validity and scalability are confirmed through experimental analyses of its applicability and anti-interference performance over extensive areas.
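The abstract does not spell out its enhanced proportional coefficient model. Under the standard flat-ground assumption, the basic shadow relation reduces to H = L · tan(θ_sun), and the reported MAE/RMSE follow their usual definitions. The sketch below illustrates both under those assumptions; the function names are hypothetical and this is not the paper's implementation:

```python
import math

def height_from_shadow(shadow_len_m, sun_elevation_deg):
    """Basic proportional relation H = L * tan(theta_sun).

    shadow_len_m: measured shadow length on flat ground (metres).
    sun_elevation_deg: sun elevation angle at image acquisition time.
    """
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

def mae_rmse(estimated, reference):
    # Mean absolute error and root mean square error of height estimates.
    errs = [e - r for e, r in zip(estimated, reference)]
    mae = sum(abs(d) for d in errs) / len(errs)
    rmse = (sum(d * d for d in errs) / len(errs)) ** 0.5
    return mae, rmse
```

Combining shadow lengths with building-side measurements, as the paper proposes, refines this basic relation when shadows are truncated or fall on uneven ground.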
“…The marketization of information collection is achieved by attracting specialized information collection service companies to provide collection services; such companies are more professional in personnel management, business operations and technical support, and more standardized. At present, a large number of wireless sensor networks carry out information collection, target monitoring and other tasks through their own sensor nodes [3]. Because the number of nodes is huge, having all nodes jointly transmit data to the sink node produces a large amount of redundant information, which wastes communication bandwidth, leaves valuable resources under-utilized, greatly reduces the efficiency of information collection, and affects real-time operation [4,5]. To address this problem, a technology called data fusion came into use.…”
The purpose of this article is to use Internet of Things (IoT) technology to analyze the characteristics of multisource, readily acquired data for the different types of planning data and the different levels of cognitive needs of participants across the urban planning process. The paper uses ontology concepts to reconstruct the relationships among multisource, heterogeneous planning data, including IoT data, planning documents and planning drawings; to design the semantic relationships of the ontology model elements; to define the relationships between data types; and to implement an ontology-based semantic expression algorithm for the planning field. This facilitates the exchange of the planning participants' understanding of the planning scheme while, based on the classification of multisource heterogeneous data features, supporting logical reasoning over ontology relationships, filtering of redundant information, and visualization of multisource heterogeneous planning data. Finally, information of the same nature collected by IoT sensor nodes is processed in batches, and a series of weighting formulas brings the fused value closer to the true value. Experiments show that the proposed feature analysis method maintains a loss of 0.02% and achieves an accuracy of 79.1% when the overall feature set for digital city planning is reduced by 67%, which effectively demonstrates the importance of multisource heterogeneous data feature analysis for digital city planning.
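The weighting formulas that pull the fused value toward the truth are not specified in the abstract. One common choice for fusing same-quantity readings from multiple sensor nodes, shown here purely as an illustrative assumption, is inverse-variance weighting:

```python
def fuse_readings(readings, variances):
    """Inverse-variance weighted average of same-quantity sensor readings.

    readings: measurements of the same quantity from different nodes.
    variances: per-sensor noise variances (assumed known; hypothetical values).
    Less noisy sensors receive larger weights, so the fused estimate has
    lower variance than any single reading.
    """
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)
```

For example, fusing readings 10.0 and 14.0 with variances 1.0 and 3.0 weights the first sensor three times as heavily, giving a fused value of 11.0 rather than the plain average of 12.0.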