This paper examines whether a passive isometric input device, such as a Spaceball™, used together with visual feedback, can provide the operator with pseudo-haptic feedback. To this end, two psychophysical experiments were conducted. The first consisted of compliance discrimination between two virtual springs operated by hand through the Spaceball™; in this experiment, the stiffness (or compliance) JND turned out to be 6%. The second experiment assessed stiffness discrimination between a virtual spring and an equivalent real spring; in this case, the stiffness (or compliance) JND was found to be 13.4%. These results are consistent with previous findings on manual discrimination of compliance, which indicates that the passive apparatus used can, to some extent, simulate haptic information. In addition, a final test indicated that visual feedback blurred the subjects' proprioceptive sense, giving them the illusion of using a non-isometric device.
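The JND values reported above act as Weber fractions: two stiffnesses are discriminable only when their relative difference exceeds the fraction. A minimal illustrative sketch (the function name and example values are ours, not from the paper):

```python
def is_discriminable(k_ref, k_cmp, jnd=0.06):
    """Return True when the relative stiffness difference between a
    reference spring and a comparison spring exceeds the Weber
    fraction (JND), i.e. when an operator should be able to tell
    the two springs apart."""
    return abs(k_cmp - k_ref) / k_ref > jnd

# For a 100 N/m reference spring and the 6% JND of Experiment 1,
# a 110 N/m comparison is discriminable, while 104 N/m is not.
```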
This paper introduces a fast continuous collision detection technique for polyhedral rigid bodies. Unlike most collision detection techniques, the computation of the first contact time between two objects is an inherent part of the algorithm. The method can thus robustly prevent object interpenetration and missed collisions, even when objects are thin or move at large velocities. It is valid for general objects (polygon soups), handles multiple moving objects and acyclic articulated bodies, and is efficient in both low- and high-coherency situations. Moreover, it can be used to speed up existing continuous collision detection methods for parametric or implicit rigid surfaces. The collision detection algorithms have been successfully coupled to a real-time dynamics simulator. Various experiments show the method's ability to produce high-quality interaction (precise object positioning, for example) between models of up to tens of thousands of triangles, which could not have been achieved with previous continuous methods.

Discrete methods. Most previous collision detection methods are discrete: they sample the objects' motions and detect interpenetrations (see for example [1, 2, 6, 10, 11, 12, 13, 17, 25, 29]). As a result, these methods may miss collisions (the tunneling effect). While an adaptive time step and predictive methods can be used to correct this problem in offline applications, they may not be suitable in interactive applications, where a relatively high and constant frame rate is required. Moreover, discrete collision detection requires backtracking methods to compute the first contact time, which is necessary in constraint-based analytical dynamics simulations. Depending on the

Figure 1. Precise car door positioning. The continuous collision detection technique described in this paper makes it possible to position the door precisely (without any object interpenetration) and interactively. The car skeleton contains about 29,000 triangles; the door contains about 16,000 triangles (3D models © Renault).
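The idea of computing the first contact time inside the detection loop, rather than sampling and backtracking, can be illustrated with a generic conservative-advancement sketch. This is an illustration of the principle only, not the specific continuous tests used in the paper; the function and its parameters are our own assumptions:

```python
def first_contact_time(distance, max_speed, t0=0.0, t1=1.0, eps=1e-4):
    """Conservatively advance time toward the first contact.

    `distance(t)` gives the separation between the two objects at
    time t, and `max_speed` bounds their relative closing speed.
    Stepping forward by distance/max_speed can never skip over a
    contact, so thin objects and fast motion cause no tunneling."""
    t = t0
    while t < t1:
        d = distance(t)
        if d <= eps:
            return t          # first contact (within tolerance)
        t += d / max_speed    # safe step: contact cannot occur sooner
    return None               # no contact in [t0, t1]

# Two points closing a 1 m gap at 2 m/s touch at t = 0.5.
```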
Background: Learning to perform new movements is usually achieved by following visual demonstrations. Haptic guidance by a force-feedback device is a recent and original technology that provides additional proprioceptive cues during visuo-motor learning tasks. The effects of two types of haptic guidance, control in position (HGP) or in force (HGF), on visuo-manual tracking ("following") of trajectories are still under debate.

Methodology/Principal Findings: Three training techniques of haptic guidance (HGP, HGF, or a control condition, NHG, without haptic guidance) were evaluated in two experiments. Movements produced by adults were assessed in terms of shape (dynamic time warping) and kinematic criteria (number of velocity peaks and mean velocity) before and after the training sessions. The trajectories consisted of two Arabic and two Japanese-inspired letters in Experiment 1 and ellipses in Experiment 2. We observed that the use of HGF globally improves the fluency of the visuo-manual tracking of trajectories, while no significant improvement was found for HGP or NHG.

Conclusion/Significance: These results show that the addition of haptic information, probably encoded in force coordinates, plays a crucial role in the visuo-manual tracking of new trajectories.
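The shape criterion mentioned above, dynamic time warping, scores trajectory similarity independently of timing differences. A simplified 1-D sketch (the study presumably compared 2-D pen trajectories with a Euclidean point cost; this reduced version is ours):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D trajectories.

    Fills the classic (n+1) x (m+1) cumulative-cost table, where each
    cell extends the cheapest of the three allowed alignments
    (match, insertion, deletion). A score of 0 means the trajectories
    have identical shape up to temporal warping."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```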
Unilateral spatial neglect is a disabling condition frequently occurring after stroke. People with neglect suffer from various spatial deficits in several modalities, which in many cases impair everyday functioning. A successful treatment is yet to be found. Several techniques have been proposed in the last decades, but only a few showed long-lasting effects and none could completely rehabilitate the condition. Diagnostic methods of neglect could be improved as well. The disorder is normally diagnosed with pen-and-paper methods, which generally do not assess patients in everyday tasks and do not address some forms of the disorder. Recently, promising new methods based on virtual reality have emerged. Virtual reality technologies hold great opportunities for the development of effective assessment and treatment techniques for neglect because they provide rich, multimodal, and highly controllable environments. In order to stimulate advancements in this domain, we present a review and an analysis of the current work. We describe past and ongoing research of virtual reality applications for unilateral neglect and discuss the existing problems and new directions for development.
This paper describes a generalization of the god-object method for haptic interaction between rigid bodies. Our approach separates the computation of the motion of the six-degree-of-freedom god-object from the computation of the force applied to the user. The motion of the god-object is computed using continuous collision detection and constraint-based quasi-statics, which enables high-quality haptic interaction between contacting rigid bodies. The force applied to the user is computed using a novel constraint-based quasi-static approach, which allows us to suppress force artifacts typically found in previous methods. The constraint-based force applied to the user, which handles any number of simultaneous contact points, is computed within a few microseconds, while the configuration of the rigid god-object is updated within a few milliseconds for rigid bodies containing up to tens of thousands of triangles. Our approach has been successfully tested on complex benchmarks. Our results show that the separation into asynchronous processes allows us to satisfy the different update rates required by the haptic and visual displays. Force shading and textures can be added to enlarge the range of haptic perception of a virtual environment. This paper is an extension of [1].
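For context, the classic god-object scheme that this work generalizes renders a spring force pulling the device toward the constrained proxy. The sketch below shows only that textbook baseline, not the paper's constraint-based force; the gain value is illustrative:

```python
def coupling_force(proxy_pos, device_pos, k=500.0):
    """Classic god-object coupling: a virtual spring of stiffness k
    (N/m) pulls the haptic device position toward the god-object
    (proxy) position, which is itself constrained to stay on the
    surface of the obstacles. Returns the force vector to render."""
    return [k * (p - d) for p, d in zip(proxy_pos, device_pos)]

# Device penetrating 1 cm along x past a proxy held at the origin
# yields a ~5 N restoring force along -x.
```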
Three-dimensional user interfaces (3D UIs) let users interact with virtual objects, environments, or information using direct 3D input in the physical and/or virtual space. In this article, the founders and organizers of the IEEE Symposium on 3D User Interfaces reflect on the state of the art in several key aspects of 3D UIs and speculate on future research.