We examine the usefulness of modern haptics equipment for virtual simulations of actual maintenance actions. To categorize the areas in which haptic simulation may be useful, we developed a taxonomy of haptic actions. The classification has two major dimensions: the general type of action performed and the type of force or torque required. Building on this taxonomy, we selected three representative tasks to evaluate in a virtual reality simulation. We conducted a series of human-subject experiments comparing user performance and preference on a disassembly task with and without haptic feedback, using CyberGlove, Phantom, and SpaceMouse interfaces. Analysis of the simulation runs shows that Phantom users learned to accomplish the simulated actions significantly more quickly than did users of the CyberGlove or the SpaceMouse. Moreover, the lack of differences in the post-experiment questionnaire suggests that haptics research should include a measure of actual performance speed or accuracy rather than relying solely on subjective reports of a device's ease of use.

Keywords: assembling, digital simulation, haptic interfaces, virtual reality, CyberGlove, Phantom, SpaceMouse interfaces, disassembly tasks, haptic actions, haptic feedback, haptic simulations, haptics research, human-subject experiments, simulated actions, simulation runs, user performance, virtual reality simulation

This material is posted here with permission of the IEEE.
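The two-dimensional taxonomy described above can be sketched as a small data structure. The abstract does not name the actual categories, so the action and force types below are illustrative placeholders, not the paper's taxonomy; the point is only to show how each representative task occupies one cell along both dimensions.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical dimension values -- the paper's actual category names
# are not given in the abstract; these are illustrative placeholders.
class ActionType(Enum):
    GRASP_AND_MOVE = "grasp and move"
    TURN = "turn"
    PUSH_PULL = "push/pull"

class ForceType(Enum):
    LINEAR_FORCE = "linear force"
    TORQUE = "torque"
    CONSTRAINED_CONTACT = "constrained contact"

@dataclass(frozen=True)
class HapticAction:
    """One cell of the two-dimensional taxonomy: action type x force type."""
    name: str
    action: ActionType
    force: ForceType

# Three hypothetical disassembly tasks, one per taxonomy cell.
tasks = [
    HapticAction("remove cover plate", ActionType.GRASP_AND_MOVE, ForceType.LINEAR_FORCE),
    HapticAction("unscrew bolt", ActionType.TURN, ForceType.TORQUE),
    HapticAction("unseat connector", ActionType.PUSH_PULL, ForceType.CONSTRAINED_CONTACT),
]

for t in tasks:
    print(f"{t.name}: action={t.action.value}, force={t.force.value}")
```

Classifying each candidate maintenance task along both dimensions in this way is what lets representative tasks be chosen so that an experiment covers distinct regions of the taxonomy rather than three variations of the same motion.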
Author(s): Aaron
We present a new data set of 1014 images with manual segmentations and semantic labels for each segment, together with a methodology for using this kind of data for recognition evaluation. The images and segmentations are from the UCB segmentation benchmark database (Martin et al., in International Conference on Computer Vision, vol. II, pp. 416-421, 2001). The database is extended by manually labeling each segment with its most specific semantic concept in WordNet (Miller et al., in Int. J. Lexicography, 1990). Our methodology establishes protocols for mapping algorithm-specific localization (e.g., segmentations) to our data, handling synonyms, scoring matches at different levels of specificity, dealing with vocabularies with sense ambiguity (the usual case), and handling ground-truth regions with multiple labels. Given these protocols, we develop two evaluation approaches. The first measures the range of semantics that an algorithm can recognize, and the second measures the frequency with which an algorithm recognizes semantics correctly. The data, the image labeling tool, and programs implementing our evaluation strategy are all available on-line (kobus.ca//research/data/IJCV_2007). We apply this infrastructure to evaluate four algorithms which learn to label image regions from weakly labeled data. The algorithms tested include two variants of multiple instance learning (MIL) and two generative multi-modal mixture models. These experiments are on a significantly larger scale than previously reported, especially in the case of MIL methods. More specifically, we used training data sets of up to 37,000 images and training vocabularies of up to 650 words. We found that one of the mixture models performed best on image annotation and the frequency-correct measure, and that variants of MIL gave the best semantic range performance. We were able to substantively improve the performance of MIL methods on the other tasks (image annotation and frequency-correct region labeling) by providing an appropriate prior.
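The "scoring matches at different levels of specificity" protocol can be sketched with a toy is-a hierarchy standing in for WordNet. The concepts and the particular credit scheme below are illustrative assumptions, not the paper's: full credit for an exact label match, partial credit decaying with distance when the predicted label is an ancestor (a less specific concept) of the ground-truth label, and zero for a label on the wrong branch.

```python
# Toy is-a hierarchy standing in for WordNet: child -> parent.
# Both the concepts and the credit scheme are illustrative.
HYPERNYM = {
    "tabby": "cat", "cat": "feline", "feline": "animal",
    "dog": "canine", "canine": "animal", "animal": "entity",
}

def ancestors(label):
    """Path from a label up to the root, inclusive of the label itself."""
    path = [label]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

def match_score(predicted, truth):
    """1.0 for an exact match; partial credit if `predicted` is a
    hypernym of `truth`, decaying with distance; 0.0 otherwise."""
    chain = ancestors(truth)
    if predicted not in chain:
        return 0.0
    return 1.0 / (1 + chain.index(predicted))

print(match_score("tabby", "tabby"))   # exact match: full credit
print(match_score("feline", "tabby"))  # less specific ancestor: partial credit
print(match_score("dog", "tabby"))     # wrong branch: no credit
```

This kind of graded scoring is what allows an algorithm that predicts "animal" for a tabby region to receive some credit without being rewarded as much as one that predicts the specific concept, and it composes naturally with the synonym and sense-ambiguity handling the abstract mentions.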
This paper discusses work executed for the development of a Ground Operated Teleoperation System for live-line maintenance of Hydro-Québec's overhead distribution network. It covers the system's development, engineering, and implementation, as well as forecasted research with the prototype. Related research conducted in conjunction with the system's development is explained further along. Finally, some other fields to which the technology can be applied are described. The operator, along with the control interfaces, was brought to the ground and placed in a cabin electrically insulated from the power lines. The task-scene information is given to the operator by a stereoscopic monitor coupled to a stereo camera mounted on a high-speed pan-and-tilt unit. A set of auxiliary CCD cameras, on either side of the platform, supplements the visual feedback to the operator. A series of sensors provides positional feedback on the different components of the platform. Figure 1 shows the main constituents of the system.
The objective of this study is to compare human performance in executing tasks with a helmet-mounted display interface using different visual cues of depth perception. The study involves two experiments: the first with direct viewing, the second with a helmet-mounted display (HMD). These experiments are designed to assess the subject's stereoacuity in an alignment task involving two rods, one mobile, the other fixed. In both experiments, the subject has no time constraints and simply has to perform the task as well as possible. The dependent variable is the depth-positioning error. Ten subjects with a stereoacuity of 20 arc-seconds or less and 20/20 visual acuity (Snellen test), corrected or not, took part in this study. In all experiments, the subject was exposed to four viewing conditions in direct view or HMD: mono-stationary, stereo-stationary, mono with motion parallax, and stereo with motion parallax. The independent variables are the presence of stereo (with vs. without), the presence of motion parallax (with vs. without), and the session (session 1 or session 2). A 2 × 2 × 2 ANOVA is used for statistical processing. This study is part of an effort to develop a teleoperation system for power distribution line maintenance work [1, 2]. Two control configurations are considered and studied: raised or direct viewing, in which the operator is positioned in a control cabin supported by an aerial platform and thus has a direct view of the task to be done, and a ground-level teleoperation technique, in which the operator is positioned in a remote-control cabin and receives information about the task from a display interface. A first prototype of the ground-level teleoperation system has been developed with the aim of studying human performance in using such a system. The prototype comprises a stereoscopic display interface. The stereoscopic video camera is located at the operator's eye level in direct viewing.
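The three two-level factors above (stereo, motion parallax, session) define eight cells of a full-factorial design, and a main effect is the difference of marginal means between a factor's two levels. The sketch below illustrates only that decomposition; the error values are synthetic placeholders, not the study's data, and a real analysis would use the repeated-measures ANOVA the abstract describes rather than raw mean differences.

```python
from statistics import mean

FACTORS = ("stereo", "parallax", "session")

# Synthetic depth-positioning errors (mm) for the eight cells of the
# 2 x 2 x 2 design -- illustrative numbers only, not the study's data.
errors = {
    # (stereo, parallax, session): cell mean error
    (0, 0, 0): 12.0, (0, 0, 1): 11.0,
    (0, 1, 0): 9.0,  (0, 1, 1): 8.5,
    (1, 0, 0): 7.0,  (1, 0, 1): 6.5,
    (1, 1, 0): 5.0,  (1, 1, 1): 4.5,
}

def main_effect(factor):
    """Difference of marginal means between the two levels of one factor
    (negative means the second level reduced the error)."""
    i = FACTORS.index(factor)
    hi = mean(v for k, v in errors.items() if k[i] == 1)
    lo = mean(v for k, v in errors.items() if k[i] == 0)
    return hi - lo

for f in FACTORS:
    print(f"main effect of {f}: {main_effect(f):+.3f} mm")
```

Averaging over the other two factors when computing each marginal mean is what isolates one factor's contribution; the ANOVA additionally tests whether each such effect, and the interactions between factors, are larger than subject-to-subject variability would predict.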
The results obtained with the prototype revealed that it is in fact possible to perform typical lineman tasks with the ground-level teleoperation technique, and that the performance of linemen with this technique, even at this early stage, is already halfway toward the performance obtained with the direct-viewing technique. Details about these experiments and the results are presented in references [1] and [2]. In teleoperation, the operator's performance and degree of work safety depend strongly on the quality of the visual, audio, and kinesthetic information received [3]. In direct vision, the operator scales depth on the basis of stereopsis and motion parallax, as well as monocular cues such as interposition, shadow effects, linear perspective, texture gradients, and relative sizes. In the case of the display interface of a ground-level cabin, the view (in terms of acuity and stereoacuity) must be as close as possible to the direct view of the work area. SPIE Vol. 2590.