Energy retrofitting is paramount to reducing energy use in existing buildings, with benefits for both the environment and occupants' finances. The increasing use of novel technologies and innovative methodologies, such as Terrestrial Laser Scanning (TLS) and Building Information Modelling (BIM), is helping to optimise retrofit processes. Energy efficiency retrofitting requires complex semantic 3D BIM models that include specific information, such as second-level space boundaries (2LSBs), material energy performance properties, and information on the Heating, Ventilation and Air Conditioning (HVAC) system and its layout. All of this information is necessary for energy analysis of the existing building and for planning effective retrofitting strategies. In this paper, we present an integrated (semi-)automated Scan-to-BIM approach that produces BIM models from point clouds and photographs of buildings by means of computer vision and artificial intelligence techniques, together with a Graphical User Interface (GUI) that enables the user to complete the models with information that cannot be retrieved from visual features. Information about materials and their performance properties, as well as the specification of HVAC components, is obtained from a database that integrates information from BAUBOOK, OKOBAUDAT and ASHRAE. The Scan-to-BIM tool introduced in this paper is evaluated with data from an inhabited two-storey building, delivering promising results in energy simulations.
Scan-to-BIM systems convert image and point cloud data into accurate 3D models of buildings. Research on Scan-to-BIM has largely focused on the automated identification of structural components. However, design and maintenance projects require information on a range of other assets, including mechanical, electrical, and plumbing (MEP) components. This paper presents a deep learning solution that locates and labels MEP components in 360° images and phone images, specifically sockets, switches, and radiators. The classification and location data generated by this solution could add useful context to BIM models. The system developed for this project uses transfer learning to retrain a Faster Region-based Convolutional Neural Network (Faster R-CNN) for the MEP use case. The performance of the neural network across image formats is investigated. A dataset of 249 360° images and 326 phone images was built to train the deep learning model. The Faster R-CNN achieved high precision and comparatively low recall across all image formats.