Use of virtual reality (VR) is considered beneficial for reviewing 3D models throughout product design. However, research on its usability in the design field is still exploratory, and previous studies often contradict one another regarding the usability of VR for 3D model review. This paper argues that the usability of VR should be assessed by analysing human factors such as spatial perception and by taking into consideration the complexity of the reviewed product. Hence, a comparative evaluation study was conducted to assess spatial perception in desktop interface-based and VR-based review of 3D models of products with different levels of complexity. The results show that participants in VR could better perceive the fit of user interface elements, and their estimates of the model dimensions had a lower relative error than with the desktop interface. It was found that various sensory cues are used to perceive model size and that the cues employed depend on the level of complexity. Finally, it is proposed that differences between a desktop interface and VR for reviewing models become more evident when reviewing models of higher complexity.
This paper provides an overview and appraisal of the International Design Engineering Annual (IDEA) challenge, a virtually hosted design hackathon run with the aim of generating a design research dataset that can provide insights into design activities at virtually hosted hackathons. The resulting dataset consists of 200+ prototypes with over 1300 connections, providing insights into the products, processes and people involved in the design process. The paper also provides recommendations for future deployments of virtual hackathons for design research.
The conventional prescriptive and descriptive models of design typically decompose the overall design process into elementary processes, such as analysis, synthesis, and evaluation. This study revisits some of the assumptions established by these models and investigates whether they can also be applied to modelling the problem-solution co-evolution patterns that appear during team conceptual design activities. The first set of assumptions concerns the relationship between performing analysis, synthesis, and evaluation and exploring the problem and solution spaces. The second set concerns the dominant sequences of analysis, synthesis, and evaluation, whereas the third set concerns the nature of transitions between the problem and solution spaces. The assumptions were empirically tested as part of a protocol analysis study of team ideation and concept review activities. Besides revealing inconsistencies in how analysis, synthesis, and evaluation are defined and interpreted across the literature, the study demonstrates co-evolution patterns that cannot be described by the conventional models. It highlights the important role of analysis-synthesis cycles during both divergent and convergent activities, corresponding to co-evolution and refinement, respectively. The findings are summarised in the form of a model of how the number of new problem and solution entities increases as the conceptual design phase progresses, with implications for both design research and design education.
This paper presents the results of computational experiments aimed at studying the effect of experience on design teams' exploration of the problem-solution space. An agent-based model of a design team was developed, and its capability to match theory-based predictions was tested. Two hypotheses were tested: (1) experienced teams need less time to find a solution, and (2) in comparison to inexperienced teams, experienced teams spend more time exploring the solution space than the problem space. The results provided support for both hypotheses, demonstrating the impact of learning and experience on exploration patterns in the problem and solution spaces, and verifying the system's capability to produce reliable results.
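The abstract does not describe the model's mechanics, but the general idea of an agent-based simulation in which experience shifts effort toward the solution space can be sketched as follows. All parameter values, function names, and update rules here are illustrative assumptions for exposition, not the authors' implementation:

```python
import random

def simulate_team(experience, threshold=10.0, seed=None, max_steps=10_000):
    """Toy agent-based sketch: a design team alternates between exploring
    the problem space and the solution space until enough solution
    knowledge accumulates.  `experience` in [0, 1] raises both the chance
    of working in the solution space and the knowledge gained per step.
    Returns (problem_steps, solution_steps)."""
    rng = random.Random(seed)
    problem_steps = solution_steps = 0
    knowledge = 0.0
    for _ in range(max_steps):
        # More experienced teams favour the solution space (assumed rates).
        if rng.random() < 0.3 + 0.4 * experience:
            solution_steps += 1
            knowledge += 0.5 + experience   # experience speeds up progress
        else:
            problem_steps += 1
            knowledge += 0.1                # problem framing helps indirectly
        if knowledge >= threshold:
            break
    return problem_steps, solution_steps

# With a shared seed, the experienced team reaches the threshold in fewer
# total steps, consistent with hypothesis (1).
novice = simulate_team(experience=0.1, seed=1)
expert = simulate_team(experience=0.9, seed=1)
assert sum(expert) < sum(novice)
```

In this sketch, hypothesis (2) corresponds to the experienced team's higher share of solution-space steps; a fuller replication would average over many random seeds rather than a single run.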