Restoration is becoming an increasingly complex process: a large number of internal and external variables coexist and may impair it. Among these, the large number of professionals involved and the huge amount of documentation produced can heavily affect the quality of the intervention, as well as the possibility of carrying out systematic and informed interventions in the future. In particular, a standardized method for storing and accessing restoration data is still lacking, and the use of new technologies remains limited and/or not scalable. This paper describes the process of designing and testing an information system (IS) based on three-dimensional (3D) data, aimed at supporting the restoration of Neptune's Fountain in Bologna. In preparation for the restoration, a major effort was made to design and implement a web-based IS able to host all of the data produced, to allow conservation-restoration specialists to interact on-site with an accurate 3D representation of the elements of the fountain, and to directly reference all information and data produced on the geometry of the model. The paper focuses on the challenges and adopted solutions related to the use of 3D models and the mapping of data onto 3D surfaces in the context of restoration documentation. Highly detailed visualizations of the models, easy navigation, and usable functionalities for adding information directly on the 3D model have been achieved by extending the available solutions and by implementing new mechanisms to overcome the limitations of WebGL and remote rendering. The Neptune IS has been extensively tested in a real context of use. Results and knowledge from this experimentation currently represent the basis for evolving the Neptune IS into a generic and flexible platform for documentation management in the field of restoration and related methodologies.
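The abstract does not specify how the Neptune IS references information on the model geometry; one standard mechanism for anchoring annotations to a 3D surface is to store barycentric coordinates relative to a mesh triangle, so the annotation stays attached to the surface rather than to a fixed point in space. A minimal numpy sketch under that assumption (all names and data below are illustrative, not taken from the Neptune IS):

```python
import numpy as np

def anchor_annotation(tri, point):
    """Express a surface point as barycentric coordinates (u, v, w)
    of a mesh triangle, solving point = u*a + v*b + w*c, u+v+w = 1."""
    a, b, c = tri
    # Two-column system in the triangle's edge basis, solved in 3D.
    T = np.column_stack([a - c, b - c])
    uv, *_ = np.linalg.lstsq(T, point - c, rcond=None)
    u, v = uv
    return np.array([u, v, 1.0 - u - v])

def resolve(tri, bary):
    """Recover the 3D position of an anchored annotation."""
    return bary @ np.asarray(tri)

tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
p = np.array([0.25, 0.25, 0.0])
bary = anchor_annotation(tri, p)   # [0.5, 0.25, 0.25]
back = resolve(tri, bary)          # recovers p
```

Because the anchor is relative to the triangle, it survives rigid transformations of the mesh, which is useful when the same element is re-exported at different resolutions or poses.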
Every day, new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and to deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The performed tests, based on the analysis of the SIFT algorithm and its most widely used variants, processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average rays per 3D point, average intersection angle per 3D point, theoretical precision of the computed 3D object coordinates, etc.).
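At the core of tie-point extraction with SIFT-like features is descriptor matching between image pairs, typically filtered with Lowe's ratio test: a candidate match is kept only when its best descriptor distance is clearly smaller than the second best. A minimal numpy sketch of the ratio test on synthetic descriptors (real descriptors would come from a detector such as SIFT; the 8-dimensional vectors below are stand-ins):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its two nearest neighbours
    in desc_b (Euclidean distance) and keep the match only if the best
    distance is below `ratio` times the second best (Lowe's test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Synthetic check: desc_b is a noisy, shuffled copy of desc_a, so the
# ground-truth match for row i of desc_a is the row j with perm[j] == i.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 8))
perm = rng.permutation(20)
desc_b = desc_a[perm] + rng.normal(scale=0.01, size=(20, 8))

matches = ratio_test_matches(desc_a, desc_b)
correct = sum(1 for i, j in matches if perm[j] == i)
```

Matched tie points are then fed to the bundle adjustment that orients the cameras; the ratio threshold trades match density against outlier rate.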
Nowadays, digital replicas of Cultural Heritage (CH) artefacts are one of the most promising innovations for museum exhibitions, since they foster new forms of interaction with collections at different scales. However, practical digitization is still a complex task reserved for specialized operators. Given these premises, this paper introduces a novel approach to support non-expert museum staff with robust, easy-to-use workflows based on low-cost, widespread devices, aimed at the study, classification, preservation, communication and restoration of CH artefacts. The proposed methodology introduces an automated combination of acquisition, based on mobile equipment, and visualization, based on real-time rendering. After describing the devices used along the workflow, the paper focuses on the image pre-processing and geometry processing techniques adopted to generate accurate 3D models from photographs. Assessment criteria for evaluating the developed process are illustrated. Tests of the methodology on real museum case studies are presented and discussed.
Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors matter during data processing because they affect algorithm outcomes; their correct treatment, reduction and enhancement is therefore fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of an image dataset and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The proposed solution aims to achieve robust automatic color balance and exposure equalization, a stable RGB-to-gray image conversion and a faithful color appearance of the digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser, and the use of a specific weighted polynomial regression. A series of tests is presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
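The abstract does not give the exact polynomial form or weighting scheme of the paper's color characterization; the sketch below illustrates the general technique only: a second-order polynomial mapping from measured to reference color-target patch values, fitted by weighted least squares (synthetic data; function names are hypothetical):

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of (N, 3) RGB values:
    [1, R, G, B, R^2, G^2, B^2, RG, RB, GB]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

def fit_weighted(measured, reference, weights):
    """Weighted least-squares fit of the polynomial color mapping.
    weights: (N,) per-patch weights, e.g. emphasising neutral patches."""
    X = poly_features(measured)
    W = np.sqrt(weights)[:, None]          # weight both sides of the system
    coef, *_ = np.linalg.lstsq(W * X, W * reference, rcond=None)
    return coef                            # shape (10, 3)

def apply_correction(rgb, coef):
    return poly_features(rgb) @ coef

# Synthetic check: recover a known affine color cast on 24 "patches".
rng = np.random.default_rng(1)
reference = rng.uniform(0.05, 0.95, size=(24, 3))
M = np.array([[1.10, 0.05, 0.00],
              [0.02, 0.90, 0.03],
              [0.00, 0.04, 1.05]])
measured = reference @ M.T + 0.02          # simulated device response
coef = fit_weighted(measured, reference, np.ones(24))
err = np.abs(apply_correction(measured, coef) - reference).max()
```

Since the simulated cast is affine, it lies inside the polynomial model and the fit recovers the reference values essentially exactly; real characterizations report residual color differences (e.g. ΔE) instead.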
KEY WORDS: Pre-processing, Filtering, Matching, Automation, Denoising, RGB to Gray, Color, 3D reconstruction

ABSTRACT: Tools and algorithms for automated image processing and 3D reconstruction have become more and more available, making it possible to process virtually any dataset of unoriented and markerless images. Typically, dense 3D point clouds (or textured 3D polygonal models) are produced in reasonable processing time. In this paper, we evaluate how the radiometric pre-processing of image datasets (particularly those in RAW format) can help improve the performance of state-of-the-art automated image processing tools. Besides a review of common pre-processing methods, an efficient pipeline based on color enhancement, image denoising, RGB-to-gray conversion and image content enrichment is presented. The performed tests, partly reported for the sake of space, demonstrate how effective image pre-processing that considers the entire dataset under analysis can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poorly textured scenes.
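To see why the RGB-to-gray conversion step matters for automated matching, consider a scene whose texture is concentrated in one channel: fixed luminance weights can discard most of the gradient information that feature detectors and dense matchers rely on. A small illustrative numpy sketch (synthetic image, not from the paper's datasets; mean gradient magnitude is used here as a rough proxy for usable texture):

```python
import numpy as np

def to_gray(img, w):
    """Convert an RGB image (H, W, 3, floats in [0, 1]) to gray using
    channel weights w (normalized to sum to 1)."""
    w = np.asarray(w, dtype=float)
    return img @ (w / w.sum())

def contrast(gray):
    """Mean gradient magnitude of a gray image."""
    gy, gx = np.gradient(gray)
    return float(np.hypot(gx, gy).mean())

# Synthetic image whose detail lives entirely in the red channel; the
# standard luminance weights (0.299, 0.587, 0.114) strongly attenuate it.
rng = np.random.default_rng(2)
img = np.empty((64, 64, 3))
img[..., 0] = rng.uniform(size=(64, 64))   # textured red channel
img[..., 1] = 0.5                          # flat green
img[..., 2] = 0.5                          # flat blue

c_luma = contrast(to_gray(img, (0.299, 0.587, 0.114)))
c_red = contrast(to_gray(img, (1.0, 0.0, 0.0)))
```

Here the red-only conversion preserves the full gradient signal while the luminance conversion keeps only about 30% of it, which is the kind of dataset-dependent behaviour a content-aware RGB-to-gray strategy is meant to avoid.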
ABSTRACT: The paper presents some experiments carried out as part of the virtual reconstruction of buildings documented only by partial sketches, partially built, or no longer existing, with the aim of (a) emphasizing the use of a semantic construction of the digital model, not only as a means of modeling a building but as a cognitive system; (b) showing the conceptual similarity between the treatises and BIM; (c) proposing new and more robust solutions for 3D modeling from 2D drawings of CH artifacts, able to allow the verification of the assumptions made during the reconstruction pipeline; and (d) making use of interactive technical references, typically real-time photorealistic rendering, for the visualization of the three-dimensional model and of snapshots of its variants, managed through an iconic system for illustrating the method of comparison and the guided reading of the model's characters through the steps taken.