Over the past few years, the application of camera-equipped Unmanned Aerial Vehicles (UAVs) for visually monitoring the construction and operation of buildings, bridges, and other civil infrastructure systems has grown exponentially. These platforms can frequently survey construction sites, monitor work in progress, create safety documentation, and inspect existing structures, particularly in hard-to-reach areas. The purpose of this paper is to provide a concise review of the most recent methods that streamline the collection, analysis, visualization, and communication of the visual data captured by these platforms, with and without Building Information Models (BIM) as a priori information. Specifically, the most relevant works from the Civil Engineering, Computer Vision, and Robotics communities are presented and compared in terms of their potential to enable automated construction monitoring and civil infrastructure condition assessment.
The ever-increasing volume of visual data, driven by recent advances in smart devices and camera-equipped platforms, provides an unprecedented opportunity to capture the actual status of construction sites at a fraction of the cost of alternative methods. Most efforts at documenting as-built status, however, stop at collecting visual data and updating BIM. Hundreds of images and videos are captured, but most soon become useless because they are not properly localized with respect to plan documents and time. To take full advantage of visual data for construction performance analytics, three aspects of capturing, analyzing, and reporting visual data are critical: reliability, relevance, and speed. This paper 1) investigates current strategies for leveraging emerging big visual data and BIM in construction performance monitoring from these three aspects, and 2) characterizes gaps in knowledge via case studies and structures a road map for research in visual sensing and analytics.
The current practice of surgical pathology relies on external contrast agents to reveal tissue architecture, which is then qualitatively examined by a trained pathologist. The diagnosis is based on comparison with standardized, empirical, qualitative assessments of limited objectivity. We propose an approach to pathology based on interferometric imaging of “unstained” biopsies, which provides unique capabilities for quantitative diagnosis and automation. We developed a label-free tissue scanner based on “quantitative phase imaging,” which maps out the optical path length at each point in the field of view and thus yields images that are sensitive to the “nanoscale” tissue architecture. Unlike the analysis of stained tissue, which is qualitative in nature and affected by color balance, staining strength, and imaging conditions, optical path length measurements are intrinsically quantitative, i.e., images can be compared across different instruments and clinical sites. These critical features allow us to automate the diagnosis process. We paired our interferometric optical system with highly parallelized, dedicated software algorithms for data acquisition, allowing us to image at a throughput comparable to that of commercial tissue scanners while maintaining nanoscale sensitivity to morphology. Based on the measured phase information, we implemented software tools for autofocusing during imaging, as well as for image archiving and data access. To illustrate the potential of our technology for large-volume pathology screening, we established an “intrinsic marker” for colorectal disease that detects tissue with dysplasia or colorectal cancer and flags specific areas for further examination, potentially improving the efficiency of existing pathology workflows.
With advances in Building Information Modeling (BIM), Virtual Reality (VR) and Augmented Reality (AR) technologies have many potential applications in the Architecture, Engineering, and Construction (AEC) industry. However, the AEC industry, relative to other industries, has been slow to adopt AR/VR technologies, partly due to a lack of feasibility studies examining the actual cost of implementation versus the increase in profit. The main objectives of this paper are to understand industry trends in adopting AR/VR technologies and to identify gaps within the industry. The identified gaps can lead to opportunities for developing new tools and finding new use cases. To achieve these goals, two rounds of a survey were conducted at two different time periods, a year apart. Responses from 158 industry experts and researchers were analyzed to assess the current state, growth, and saving opportunities for AR/VR technologies in the AEC industry. The findings demonstrate that older generations are significantly more confident about the future of AR/VR technologies and see more benefits in AR/VR utilization. Furthermore, the results indicate that the residential and commercial sectors have adopted these tools the most, while the institutional and transportation sectors had the highest growth from 2017 to 2018. Industry experts anticipated solid growth in the use of AR/VR technologies over the next 5 to 10 years, with the highest expectations in healthcare. Ultimately, the findings show a significant increase in AR/VR utilization in the AEC industry from 2017 to 2018.
Although adherence to project schedules and budgets is highly valued by project owners, more than 53% of typical construction projects are behind schedule and more than 66% suffer from cost overruns, partly due to the inability to accurately capture construction progress. To address these challenges, this paper presents new geometry- and appearance-based reasoning methods for detecting construction progress, which have the potential to provide more frequent progress measures using visual data that are already being collected by general contractors. The initial step of geometry-based filtering detects the state of construction of Building Information Modeling (BIM) elements (e.g., in-progress, completed). The next step of appearance-based reasoning captures operation-level activities by recognizing different material types. Two methods have been investigated for the latter step: texture-based reasoning for image-based 3D point clouds and color-based reasoning for laser-scanned point clouds. This paper presents two case studies for each reasoning approach to validate the proposed methods. The results demonstrate the effectiveness and practical significance of the proposed methods.
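The two-step pipeline described in this abstract (geometry-based filtering of BIM elements followed by appearance-based material reasoning over point-cloud colors) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the point-in-bounding-box geometry test, the gray-vs-non-gray color rule, and all thresholds are assumptions chosen only to make the structure of the reasoning concrete.

```python
# Hypothetical sketch of two-step progress detection:
# 1) geometry-based filtering: a BIM element is considered present if enough
#    point-cloud points fall inside its bounding box;
# 2) appearance-based (color-based) reasoning: the dominant color of those
#    points decides the material state (e.g., gray concrete vs. formwork).
# All names, thresholds, and the color rule are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]
Color = Tuple[int, int, int]  # RGB, 0-255

@dataclass
class BimElement:
    name: str
    bbox: Tuple[float, float, float, float, float, float]  # xmin..zmax

def inside(p: Point, bbox) -> bool:
    xmin, ymin, zmin, xmax, ymax, zmax = bbox
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax and zmin <= p[2] <= zmax

def classify_material(colors: List[Color]) -> str:
    # Toy color rule: mostly near-gray points -> "concrete", else "formwork".
    gray = sum(1 for r, g, b in colors if abs(r - g) < 20 and abs(g - b) < 20)
    return "concrete" if gray >= len(colors) / 2 else "formwork"

def assess_progress(element: BimElement, points: List[Point],
                    colors: List[Color], min_points: int = 5
                    ) -> Tuple[str, Optional[str]]:
    """Return (state, material) for one BIM element."""
    idx = [i for i, p in enumerate(points) if inside(p, element.bbox)]
    if len(idx) < min_points:
        return "not-started", None  # geometry-based check: too few points
    material = classify_material([colors[i] for i in idx])
    state = "completed" if material == "concrete" else "in-progress"
    return state, material
```

In this sketch, the geometry step only gates whether an element is scored at all; the appearance step refines the state at the operation level, mirroring how the paper separates element detection from material recognition.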