Spacedesign is an innovative Mixed Reality (MR) application for the aesthetic design of free-form curves and surfaces. It is a unique and comprehensive approach that uses task-specific configurations to support the design workflow from concept to mock-up evaluation and review. The first-phase conceptual design benefits from a workbench-like 3-D display for freehand sketching, surfacing and engineering visualization. Semitransparent stereo glasses augment the pre-production physical prototype with additional shapes, textures and annotations. Both workspaces share a common interface and allow collaboration and cooperation between different experts, who can configure the system for the specific task. A faster design workflow and CAD data consistency can thus be achieved naturally. Tests and collaborations with designers, mainly from the automotive industry, are providing systematic feedback for this ongoing research. To the best of the authors' knowledge, there is no similar approach that integrates the creation and editing of 3D curves and surfaces in Virtual and Augmented Reality (VR/AR). Herein we see the major contribution of our new application.
The rapid technological evolution characterizing all the disciplines involved in the broad concept of smart cities is becoming a key factor in triggering true user-driven innovation. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The goal of this paper is to introduce the i-SCOPE (interoperable Smart City services through an Open Platform for urban Ecosystems) project methodology and implementations, together with key technologies and open standards. Based on interoperable 3D CityGML UIMs, the aim of i-SCOPE is to deliver an open platform on top of which it is possible to develop various 'smart city' services within different domains. Moreover, i-SCOPE tackles issues that transcend the merely technological domain, including social and environmental aspects. Indeed, several tasks must be considered, including citizen awareness, crowdsourced and volunteer-based data collection, and privacy issues concerning the people involved.
The present work proposes a new approach for defining interactive user manuals for complex assemblies, using an enabling technology of Industry 4.0, i.e. Augmented Reality. The AR environment supports the user in step-by-step assembly on the fly. The study of this method, suitable for realizing the assembly of parts, is a stimulating engineering task that can take advantage of the latest innovations in imaging technologies and computer graphics. In the present paper, an innovative method based on Augmented Reality to support component assembly is proposed. The methodology is based on a four-step process: first, the designer defines the assembly structure in a CAD system. Second, an inexperienced user assembles the same parts without any guidance: the differences between the two assembly sequences are documented and broken down in order to identify critical points in the assembly. Third, a virtual user manual is built in the Augmented Reality environment. Finally, the assembly is performed by the same inexperienced user, guided by the AR tool. When the end user operates the tool, the location of the item to assemble is detected by tracking the user's finger position. To help the end user through the assembly procedure, a series of symbols and texts is overlaid on the external scene. In this paper, a case study based on the assembly of a scale model has been developed to evaluate the methodology. After an evaluation process, the procedure appears feasible and presents some advantages over the state-of-the-art methodologies proposed in the literature.
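The comparison of the two assembly sequences in the second step can be sketched in code. This is a hypothetical illustration only, not the paper's implementation; the function name and the part names are invented for the example.

```python
# Illustrative sketch: compare the designer's assembly sequence with the
# inexperienced user's sequence to flag critical points that deserve extra
# AR guidance. Part names are hypothetical.

def find_critical_steps(designer_seq, user_seq):
    """Return (position, expected part, actual part) wherever the sequences diverge."""
    critical = []
    for i, (expected, actual) in enumerate(zip(designer_seq, user_seq)):
        if expected != actual:
            critical.append((i, expected, actual))
    return critical

designer = ["chassis", "axle", "wheel", "body", "roof"]
user     = ["chassis", "wheel", "axle", "body", "roof"]

# Steps 1 and 2 are swapped, so they would be annotated as critical points.
print(find_critical_steps(designer, user))
```

A real system would also handle sequences of different lengths and repeated parts, but the principle of documenting divergences step by step is the same.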
Information visualization has been widely adopted to represent and visualize data patterns as it offers users fast access to data facts and can highlight specific points beyond plain figures and words. As data comes from multiple sources, in all types of formats, and in unprecedented volumes, the need intensifies for more powerful and effective data visualization tools. In the manufacturing industry, immersive technology can enhance the way users artificially perceive and interact with data linked to the shop floor. However, showcases of prototypes of such technology have shown limited results. The low level of digitalization, the complexity of the required infrastructure, the lack of knowledge about Augmented Reality (AR), and the calibration processes that are required whenever the shop floor configuration changes hinder the adoption of the technology. In this paper, we investigate the design of middleware that can automate the configuration of X-Reality (XR) systems and create tangible in-situ visualizations of, and interactions with, industrial assets. The main contribution of this paper is a middleware architecture that enables communication and interaction across different technologies without manual configuration or calibration. This has the potential to turn shop floors into seamless interaction spaces that empower users with pervasive forms of data sharing, analysis and presentation that are not restricted to a specific hardware configuration. The novelty of our work lies in its autonomous approach to finding and communicating calibrations and data format transformations between devices, which does not require user intervention. Our prototype middleware has been validated with a test case in a controlled digital-physical scenario composed of a robot and industrial equipment.
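The idea of autonomously chaining pairwise calibrations between devices can be sketched as a small registry that composes known transforms. This is a hedged sketch under strong simplifications (transforms reduced to 3-D translation offsets), not the paper's middleware; all device names are invented.

```python
# Illustrative sketch: a registry of pairwise device calibrations that
# discovers a transform between any two device frames by breadth-first
# search over the calibration graph. Device names are hypothetical.

from collections import deque

class CalibrationRegistry:
    def __init__(self):
        self.edges = {}  # device -> list of (neighbour, translation offset)

    def register(self, a, b, offset):
        """Store the offset from frame a to frame b, plus its inverse."""
        self.edges.setdefault(a, []).append((b, offset))
        self.edges.setdefault(b, []).append((a, tuple(-c for c in offset)))

    def transform(self, src, dst):
        """Chain calibrations from src to dst; None if no path exists."""
        queue, seen = deque([(src, (0.0, 0.0, 0.0))]), {src}
        while queue:
            node, acc = queue.popleft()
            if node == dst:
                return acc
            for nxt, off in self.edges.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, tuple(a + o for a, o in zip(acc, off))))
        return None

reg = CalibrationRegistry()
reg.register("headset", "robot", (1.0, 0.0, 0.5))
reg.register("robot", "plc_sensor", (0.0, 2.0, 0.0))
print(reg.transform("headset", "plc_sensor"))  # (1.0, 2.0, 0.5)
```

A production middleware would use full 6-DoF rigid transforms (rotation and translation) and negotiate data formats as well, but the graph-search pattern for avoiding manual calibration is the core idea.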
Constant improvements in the fields of surveying, computing and distribution of digital content are reshaping the way Cultural Heritage can be digitised and virtually accessed, even remotely via the web. A traditional 2D approach to data access, exploration and retrieval may generally suffice; however, more complex analyses concerning spatial and temporal features require 3D tools, which, in some cases, have not yet been implemented or are not yet commercially available. Efficient organisation and integration strategies applicable to the wide array of heterogeneous data in the field of Cultural Heritage are currently a hot research topic. This article presents a visualisation and query tool (QueryArch3D) conceived to deal with multi-resolution 3D models. Geometric data are organised in successive levels of detail (LoD), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The visualisation and query front-end enables the 3D navigation of the models in a virtual environment, as well as interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application, or served through the web. The characteristics of the research work, some implementation issues and the developed QueryArch3D tool are discussed and presented.
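The organisation of model parts by level of detail with externally sourced attributes, queryable from the front-end, can be sketched as follows. The class, field and part names are assumptions for illustration, not the actual QueryArch3D schema.

```python
# Hedged sketch of a multi-resolution organisation: each model part carries
# a level of detail and attributes from external sources, and can be
# queried by attribute. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ModelPart:
    name: str
    lod: int                      # level of detail, e.g. 1 = coarse, 3 = fine
    attributes: dict = field(default_factory=dict)

def query_by_attribute(parts, key, value):
    """Return the names of parts whose attribute matches the requested value."""
    return [p.name for p in parts if p.attributes.get(key) == value]

parts = [
    ModelPart("temple_facade", lod=3, attributes={"period": "Classic"}),
    ModelPart("stairway", lod=2, attributes={"period": "Late Classic"}),
    ModelPart("terrain", lod=1, attributes={"period": "Classic"}),
]
print(query_by_attribute(parts, "period", "Classic"))
```

Geometry-based queries (e.g. selecting parts intersecting a 3D region) would follow the same pattern, filtering on spatial predicates instead of attribute values.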
Nowadays, rapid technological development in acquiring geo-spatial information, combined with the capability to process these data in a relatively short time, allows the generation of detailed 3D textured city models. These models will become an essential part of the modern city information infrastructure (Spatial Data Infrastructure) and can be used to integrate various data from different sources for publicly accessible visualisation and many other applications. One of the main bottlenecks that currently limits the use of these datasets to a few experts is the lack of efficient web-based visualization systems and interoperable frameworks that standardise access to the city models. The work presented in this paper tries to satisfy these two requirements by developing a 3D web-based visualization system based on OGC standards and effective visualization concepts. The architectural framework, based on Service Oriented Architecture (SOA) concepts, provides the 3D city data to a web client designed to support the viewing process in a very effective way. The first part of the work is the design of a framework compliant with the 3D Portrayal Service drafted by the Open Geospatial Consortium (OGC) 3D standardization working group. The second part is the development of an effective web client able to render the 3D city models efficiently.
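In an SOA setup of this kind, the web client typically composes a key-value HTTP request for a scene within a bounding box. The sketch below is an assumption-laden illustration: the endpoint URL and parameter names are invented for the example and are not quoted from the OGC 3D Portrayal Service draft.

```python
# Illustrative sketch of a client-side request builder for a 3D
# portrayal-style service. Endpoint and parameter names are hypothetical.

from urllib.parse import urlencode

def build_scene_request(base_url, layer, bbox, fmt="model/gltf+json"):
    """Compose a GetScene-style request for a bounding box of the city model."""
    params = {
        "SERVICE": "3DPS",
        "REQUEST": "GetScene",
        "LAYER": layer,
        "BOUNDINGBOX": ",".join(str(c) for c in bbox),
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = build_scene_request("https://example.org/3dps", "buildings",
                          (11.07, 46.06, 11.13, 46.10))
print(url)
```

The server would answer with a renderable scene (or tiles thereof), which the client streams and draws; the key-value request style mirrors other OGC web services such as WMS.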