Augmented Reality (AR) is a rapidly developing field with numerous potential applications. For example, building developers, public authorities, and other construction industry stakeholders need to visually assess potential new developments with regard to aesthetics, health and safety, and other criteria. Current state-of-the-art visualization technologies are mainly fully virtual, whereas AR can enhance such visualizations by displaying proposed designs directly within the real environment.
A novel AR system is presented that is particularly suited to urban applications. It is based on monocular vision, is markerless, and does not rely on beacon-based localization technologies (such as GPS) or inertial sensors. In addition, the system automatically computes occlusions of the augmenting virtual objects by the built environment.
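The occlusion handling described above can be illustrated with a per-pixel depth test: a virtual pixel is composited into the frame only where the virtual surface lies in front of the real scene. The sketch below uses synthetic placeholder depth maps; the paper's actual monocular depth recovery is not reproduced here.

```python
# Hedged sketch of depth-based occlusion compositing. The depth values are
# illustrative assumptions; real-scene depth would come from the vision
# pipeline, which this example does not implement.

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test: keep the virtual color where the virtual
    surface is closer than the real scene, else keep the real pixel.
    All inputs are 2D lists of equal shape; a virtual depth of None
    means no virtual geometry covers that pixel."""
    out = []
    for rr, rd, vr, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        row = [v if (v_d is not None and v_d < r_d) else r
               for r, r_d, v, v_d in zip(rr, rd, vr, vd)]
        out.append(row)
    return out

# A 1x2 frame: the virtual object (depth 3) is in front of the real scene
# at the first pixel (depth 5) but behind it at the second (depth 1).
frame = composite([["R", "R"]], [[5, 1]], [["V", "V"]], [[3, 3]])
```

This mirrors the z-buffer test used in standard rendering pipelines, applied per frame against an estimated depth map of the real scene.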
Three datasets from real environments, presenting different levels of complexity (geometry, texture, occlusion), are used to demonstrate the performance of the proposed system. Videos augmented with our system are shown to provide realistic and valuable visualizations of proposed changes to the urban environment. Limitations are also discussed, along with suggestions for future work.
Measuring a real-world object in metric units from a perspective view is an important task in many computer vision applications. The goal of perspective calibration is to map the reference coordinates of a 3D object to 2D image coordinates. This is usually done by recovering the perspective transformation parameters from a camera view of a known calibration pattern. Many available algorithms require processing all the points of the calibration pattern to recover the transformation parameters effectively. The approach we present for computing the inverse perspective transformation does not require considering all the grid points. Instead, the projections of the 3D grid points are matched to the model using a graph-based approach. Extensive experiments have demonstrated the efficacy of the algorithm in different application fields, such as camera calibration and automotive headlight beam characterization.
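The 3D-to-2D mapping at the heart of perspective calibration can be sketched with a minimal pinhole camera model. All numeric values below (focal length, principal point, grid spacing) are illustrative assumptions, not parameters from the paper, and lens distortion is ignored.

```python
# Minimal pinhole-projection sketch: mapping 3D reference coordinates
# (in camera coordinates, Z > 0) to 2D pixel coordinates. The intrinsics
# f, cx, cy are hypothetical placeholder values.

def project(point3d, f=800.0, cx=320.0, cy=240.0):
    """Project a 3D point to pixel coordinates with a pinhole model:
    u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)

# A planar 3x3 calibration grid (0.1 m spacing) at Z = 2 m. Calibration
# algorithms recover f, cx, cy (and the camera pose) from correspondences
# between such known grid points and their observed projections.
grid = [(x * 0.1, y * 0.1, 2.0) for x in range(3) for y in range(3)]
pixels = [project(p) for p in grid]
```

Recovering the transformation is the inverse problem: given the observed `pixels` and the known `grid`, solve for the projection parameters; the abstract's contribution is doing this without matching every grid point individually.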
Tools for high-throughput, high-content image analysis can simplify and expedite different stages of biological experiments by processing and combining information acquired at different times and in different areas of the culture. Among the most important tools in this field, image mosaicing methods provide the researcher with a global view of the biological sample in a single image. Current approaches rely on known motorized x-y stage offsets and work in batch mode, thus hindering interaction between the microscope system and the researcher during investigation of the cell culture. In this work we present an approach for mosaicing optical microscope imagery based on local image registration and exploiting visual information only. To our knowledge, this is the first approach suitable for on-line use with non-motorized microscopes. To assess our method, the quality of the resulting mosaics is quantitatively evaluated through purpose-built image metrics. Experimental results show the importance of model selection and confirm the soundness of our approach.
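The local, vision-only registration step can be illustrated with an exhaustive integer-offset search that scores candidate translations by sum of squared differences over the overlapping region. This is a hedged stand-in for the registration described in the abstract; the paper's actual motion model and metrics are not reproduced here.

```python
# Sketch of pairwise tile registration by brute-force translation search.
# Images are 2D lists of grayscale values; only integer shifts are tried.

def register(ref, tile, max_shift=3):
    """Find the integer (dy, dx) offset of `tile` relative to `ref` that
    minimizes mean squared difference over the overlapping pixels."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry < h and 0 <= rx < w:
                        cost += (ref[ry][rx] - tile[y][x]) ** 2
                        n += 1
            if n and cost / n < best_cost:
                best_cost, best = cost / n, (dy, dx)
    return best

# Synthetic check: a gradient image and a copy shifted by (1, 2) should
# register back to that offset.
ref = [[y * 10 + x for x in range(8)] for y in range(8)]
tile = [[(y + 1) * 10 + (x + 2) for x in range(8)] for y in range(8)]
offset = register(ref, tile)
```

Once pairwise offsets are known, tiles can be placed into a common mosaic frame incrementally, which is what enables on-line operation without stage-offset metadata. Practical systems replace the brute-force search with faster schemes such as phase correlation.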