The documentation, dissemination, and enhancement of Cultural Heritage are of great relevance. To that end, technological tools and interactive solutions (e.g., 3D models) have become increasingly popular. Historical silk fabrics are nearly flat objects, very fragile, and with complex internal geometries related to different weaving techniques and types of yarns. These characteristics make it difficult to properly document them at the yarn level with current technologies. In this paper, we present a new methodology to virtually represent such heritage and produce 3D printouts, also making it highly interactive through the Virtual Loom tool. Our work involves sustainability from different perspectives: (1) the traditional production of silk fabrics respects the environment; (2) the Virtual Loom allows silk heritage to be studied while avoiding the degradation of the original objects; (3) the Virtual Loom allows creative industries to save money and materials; (4) current research on bioplastics for 3D printing contributes to environmental sustainability; (5) edutainment and gaming can also benefit from the Virtual Loom, avoiding the need to acquire the original objects and enhancing creativity. The presented work has been carried out within the scope of the SILKNOW project; we show some results and discuss the sustainability issues involved, from the production of traditional silk fabrics to their dissemination by means of the Virtual Loom and 3D-printed shapes.
3D modelling of man-made objects is widely used in the cultural heritage sector, among others. It is relevant for documentation, dissemination, and preservation. For historical fabrics, weaves and weaving techniques are still mostly represented as 2D graphics and textual descriptions. However, complex geometries are difficult to represent in such forms, hindering the way this legacy is transmitted to new generations. In this paper, we present the design and implementation of SILKNOW’s Virtual Loom, an interactive tool aimed at documenting, preserving, and representing in interactive 3D form the historical weaves and weaving techniques of silk fabrics dating from the 15th to the 19th centuries. To that end, our tool only requires an image of a historical fabric. Starting from this image, the tool automatically extracts the design and allows the user to apply different weaves and weaving techniques. In its current version, the tool embeds five traditional weaving techniques, 39 weaves, and six types of yarns, which have been defined thanks to close collaboration among experts in computer graphics, art history, and historical fabrics. Additionally, users can change the color of yarns and produce different 3D representations for a given fabric, which are interactive in real time. In this paper, we present the details of the design and implementation of this tool, focusing on the input data, the strategy to process images, the 3D modelling of yarns, the definition of weaves and weaving techniques, and the graphical user interface. In the results section, we show some examples of image analysis used to extract the design of historical fabrics, and then we provide 3D representations for all the considered weaving techniques, combining different types of yarns.
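The abstract states that the Virtual Loom automatically extracts a fabric's design from a single image, but does not specify the algorithm. One common approach to this kind of design extraction is to cluster pixel colours so that figure and ground separate. The sketch below illustrates the idea with a small k-means in plain NumPy; the function name and parameters are illustrative assumptions, not Virtual Loom's actual implementation.

```python
import numpy as np

def extract_design(image, k=2, iters=10):
    """Cluster pixel colours with a simple k-means so that the design
    (figure) separates from the ground of a fabric image.
    `image` is an (H, W, 3) float array in [0, 1]; returns an (H, W)
    array of cluster labels."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    # Initialise centroids from evenly spaced pixels (a copy, not a view).
    centroids = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        dists = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

On a two-tone brocade photograph this yields a per-pixel label map that a weaving tool could then map to warp/weft assignments; real historical fabrics would need more clusters and some noise handling.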
Due to the increasing use of data analytics, information visualization is becoming increasingly important. However, as data get more complex, so does visualization, often leading to ad hoc and cumbersome solutions. A recent alternative is the use of so-called knowledge-assisted visualization tools. In this paper, we present STMaps (Spatio-Temporal Maps), a multipurpose, knowledge-assisted, ontology-based visualization tool for spatio-temporal data. STMaps was originally designed to show, by means of an interactive map, the content of the SILKNOW project, a European research project on silk heritage. It is entirely based on ontology support: it gets the source data from one ontology and uses another ontology to define how the data should be visualized. STMaps provides some unique features. First, it is a multi-platform application: it can work embedded in an HTML page and can also run as a standalone application on several computer architectures. Second, it can be used for multiple purposes by simply changing its configuration files and/or the ontologies on which it works. Because STMaps visualizes spatio-temporal data provided by an ontology, the tool can be applied to any domain (in other cultural and non-cultural contexts), provided that its datasets contain spatio-temporal information. The visualization mechanisms can also be adapted by changing the visualization ontology. Third, it provides different solutions to show spatio-temporal data, and it also deals with uncertain and missing information. STMaps has been tested by browsing silk-related objects, discovering some interesting relationships between different objects and showing the versatility and power of the different visualization tools proposed in this paper. To the best of our knowledge, this is also the first ontology-based visualization tool applied to silk-related heritage.
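STMaps is said to deal with uncertain and missing spatio-temporal information, though the abstract does not say how. One simple strategy a map viewer can use is to layer records by how complete their data are: plottable, temporally uncertain, or unmappable. The sketch below illustrates that idea with a toy record type; the field names and structure are assumptions for illustration, not SILKNOW's actual ontology schema.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Toy record standing in for an ontology query result.
@dataclass
class SilkObject:
    name: str
    location: Optional[Tuple[float, float]]  # (lat, lon), or None if unknown
    period: Optional[Tuple[int, int]]        # (start, end) year, or None if unknown

def place_on_map(objects: List[SilkObject], year_range: Tuple[int, int]):
    """Split records into points plottable within `year_range`,
    temporally uncertain records (no date), and unmappable ones
    (no coordinates), mirroring how a viewer might layer them."""
    start, end = year_range
    plotted, undated, unmapped = [], [], []
    for obj in objects:
        if obj.location is None:
            unmapped.append(obj.name)        # cannot be drawn on the map
        elif obj.period is None:
            undated.append(obj.name)         # drawn, but flagged as uncertain
        elif obj.period[0] <= end and obj.period[1] >= start:
            plotted.append(obj.name)         # overlaps the requested interval
    return plotted, undated, unmapped
```

A real implementation would pull these records from the data ontology via queries and read the layering rules from the visualization ontology rather than hard-coding them.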
Augmented Reality (AR) annotations are a powerful means of communication when collaborators cannot be present at the same time in a given environment. However, this situation presents several challenges, for example: how to record the AR annotations for later consumption, how to align the virtual and real worlds in unprepared environments, or how to offer the annotations to users with different AR devices. In this paper, we present a cross-device AR annotation method that allows users to create and display annotations asynchronously in environments without the need for prior preparation (AR markers, point-cloud capture, etc.). This is achieved through an easy user-assisted calibration process and a data model that allows any type of annotation to be stored on any device. The experimental study carried out with 40 participants verified our two hypotheses: AR annotations can be visualized in indoor environments without prior preparation regardless of the device used, and the overall usability of the system is satisfactory.
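The abstract mentions a user-assisted calibration process for aligning the virtual and real worlds, without detailing the procedure. A common building block for such alignment is estimating a rigid transform from a few user-matched reference points, for example via the Kabsch algorithm. The sketch below is an illustrative stand-in under that assumption, not the authors' actual method.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t mapping `src` points onto
    `dst` (both (N, 3) arrays) with the Kabsch algorithm, so that
    dst ≈ src @ R.T + t. This is one way a user-assisted calibration
    can align AR coordinate frames from a few matched reference points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # The SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Three or four well-spread, non-collinear point pairs (e.g., room corners tapped by the user on each device) are enough to recover the transform; more pairs average out pointing error.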