Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Drillers outperformed scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and percentage of search errors in which a nodule was never fixated.
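The coregistration step described above lends itself to a compact illustration. The Python sketch below shows one plausible way to fuse 2-D gaze samples with the slice index shown at each moment into a 3-D scanpath, plus a crude driller/scanner heuristic; the function names, thresholds, and slice-thickness default are assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

def build_3d_scanpath(gaze_xy, slice_idx, slice_thickness_mm=1.0):
    """Fuse 2-D gaze samples with the CT slice displayed at each
    timestamp into a 3-D scanpath through the image volume."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)       # (N, 2) image-plane positions
    z = np.asarray(slice_idx, dtype=float) * slice_thickness_mm
    return np.column_stack([gaze_xy, z])             # (N, 3) points: x, y, depth

def classify_strategy(scanpath, xy_spread_mm=50.0, z_speed_mm=1.0):
    """Crude heuristic, not the study's actual procedure: drillers keep
    gaze in a small in-plane region while moving quickly through depth."""
    spread = scanpath[:, :2].std(axis=0).mean()      # in-plane dispersion
    speed = np.abs(np.diff(scanpath[:, 2])).mean()   # mean depth change per sample
    return "driller" if spread < xy_spread_mm and speed > z_speed_mm else "scanner"
```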
Figure 1: inFORM enables new interaction techniques for shape-changing UIs. Left to right: on-demand UI elements through Dynamic Affordances; guiding interaction with Dynamic Constraints; object actuation; physical rendering of content and UI.

Abstract: Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate, by providing dynamic physical affordances through shape change; to restrict, by guiding users with dynamic physical constraints; and to manipulate, by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
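To make the idea of physically rendering UI elements concrete, here is a minimal Python sketch of the core rendering loop for a pin-based shape display: a height map is clamped to pin travel and pushed to the actuators, so a button affordance is just a local bump that can appear or vanish on demand. The grid size, travel range, and send_pin API are hypothetical, not inFORM's actual interface.

```python
import numpy as np

PIN_ROWS, PIN_COLS = 16, 24   # hypothetical pin-grid dimensions
MAX_TRAVEL_MM = 100.0         # hypothetical actuator travel

def render_heightmap(height_mm, send_pin):
    """Clamp a height map to the pin travel range and issue one target
    height per pin; send_pin(row, col, mm) is an assumed actuator
    command, not a real inFORM call."""
    h = np.clip(np.asarray(height_mm, dtype=float), 0.0, MAX_TRAVEL_MM)
    for r in range(PIN_ROWS):
        for c in range(PIN_COLS):
            send_pin(r, c, h[r, c])

def add_button(height_mm, row, col, radius=2, bump_mm=15.0):
    """A dynamic affordance as a local bump: raise a small disc of pins
    to invite pressing; re-render without it to retract the UI element."""
    h = np.asarray(height_mm, dtype=float).copy()
    for r in range(max(0, row - radius), min(PIN_ROWS, row + radius + 1)):
        for c in range(max(0, col - radius), min(PIN_COLS, col + radius + 1)):
            if (r - row) ** 2 + (c - col) ** 2 <= radius ** 2:
                h[r, c] += bump_mm
    return h
```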
Figure 1. Rubbing and tapping gestures activate operations while the user is touching the display, so that additional parameter control and functionality can be activated during the fluid interaction. (a) Rubbing in and (b) rubbing out support two operations. (c) Bimanual interaction on single-touch displays is simulated with a set of "tapping" techniques, where operations are executed by tapping with a secondary finger (left) while the primary finger (right) is touching the display.

Abstract: We introduce two families of techniques, rubbing and tapping, that use zooming to make precise interaction on passive touch screens possible. Rub-Pointing uses a diagonal rubbing gesture to integrate pointing and zooming in a single-handed technique. In contrast, Zoom-Tapping is a two-handed technique in which the dominant hand points while the non-dominant hand taps to zoom, simulating multi-touch functionality on a single-touch display. Rub-Tapping is a hybrid technique that integrates rubbing with the dominant hand to point and zoom, and tapping with the non-dominant hand to confirm selection. We describe the results of a formal user study comparing these techniques with each other and with the well-known Take-Off and Zoom-Pointing selection techniques. Rub-Pointing and Zoom-Tapping had significantly fewer errors than Take-Off for small targets, and were significantly faster than Take-Off and Zoom-Pointing. We show how the techniques can be used for fluid interaction in an image viewer and in existing applications, such as Google Maps.
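A diagonal rubbing gesture can be recognized by projecting the touch trajectory onto the screen diagonal and counting direction reversals. The Python sketch below is one plausible detector under that assumption; the thresholds, and the mapping from final stroke direction to rub-in versus rub-out, are guesses rather than the paper's actual recognizer.

```python
def detect_rub(points, min_reversals=2, min_stroke_px=10):
    """Classify one continuous touch trajectory as a rub gesture.

    points: list of (x, y) samples while the finger stays down.
    Motion is projected onto the main diagonal; enough back-and-forth
    reversals along it count as rubbing. All thresholds are made up.
    """
    diag = [x + y for x, y in points]   # signed position along the diagonal
    reversals, direction = 0, 0         # +1 = down-right, -1 = up-left
    last_extreme = diag[0]
    for d in diag[1:]:
        if direction == 0:              # wait for the first clear stroke
            if abs(d - last_extreme) >= min_stroke_px:
                direction = 1 if d > last_extreme else -1
                last_extreme = d
        elif (d - last_extreme) * direction >= 0:
            last_extreme = d            # still moving the same way
        elif abs(d - last_extreme) >= min_stroke_px:
            reversals += 1              # clear turn: count a reversal
            direction = -direction
            last_extreme = d
    if reversals < min_reversals:
        return None
    # Guessed mapping: final stroke direction picks the variant.
    return "rub-in" if direction < 0 else "rub-out"
```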
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a timestamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.
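As a rough sketch of the referential-agent idea, the Python class below keeps a timestamped history of the objects that intersect an agent's volume and ranks candidate referents by recency-weighted dwell. The window length and scoring function are assumptions for illustration; the system derives its own statistics from the intersection history before fusing them with the gesture and speech hypotheses.

```python
import time
from collections import defaultdict

class ReferentialAgent:
    """A volume attached to a 3D tracker records which scene objects
    intersect it over time and ranks potential referents from that
    history. Scoring here is a recency-weighted dwell heuristic, an
    assumption rather than the published system's actual statistics."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.history = []            # (timestamp, object_id) events

    def record_intersections(self, object_ids, now=None):
        now = time.time() if now is None else now
        self.history.extend((now, oid) for oid in object_ids)

    def rank_referents(self, now=None):
        now = time.time() if now is None else now
        scores = defaultdict(float)
        for t, oid in self.history:
            age = now - t
            if 0.0 <= age <= self.window_s:
                scores[oid] += 1.0 - age / self.window_s
        return sorted(scores.items(), key=lambda kv: -kv[1])
```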
Malleable and organic user interfaces have the potential to enable radically new forms of interaction and expressiveness through flexible, free-form, and computationally controlled shapes and displays. This work focuses specifically on particle jamming as a simple, effective method for flexible, shape-changing user interfaces, where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances, and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets, and portable shape-changing mobile devices.
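Programmatic stiffness control reduces, at its simplest, to mapping a requested stiffness to a vacuum setpoint for the jamming cell. The Python sketch below assumes a linear mapping and a made-up pressure range; real jamming cells respond nonlinearly and would need per-device calibration.

```python
def stiffness_to_vacuum(stiffness, p_min_kpa=0.0, p_max_kpa=80.0):
    """Map a normalized stiffness request (0 = fully malleable,
    1 = rigid) to a vacuum setpoint for a particle-jamming cell.
    Linearity and the pressure range are assumptions, not measured
    device behavior."""
    s = min(max(float(stiffness), 0.0), 1.0)
    return p_min_kpa + s * (p_max_kpa - p_min_kpa)

# Example: a soft UI region firms up as the user begins a drag,
# increasing control gain for fine adjustment.
setpoint = stiffness_to_vacuum(0.8)   # -> 64.0 kPa of vacuum
```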
In this work we present how Augmented Reality (AR) can be used to create an intimate integration of process data with the workspace of an industrial CNC (computer numerical control) machine. AR allows us to combine interactive computer graphics with real objects in a physical environment, in this case the workspace of an industrial lathe. ASTOR is an autostereoscopic optical see-through spatial AR system, which provides real-time 3D visual feedback without the need for user-worn equipment, such as head-mounted displays or tracking sensors. The use of a transparent holographic optical element, overlaid onto the safety glass, allows the system to simultaneously provide bright imagery and clear visibility of the tool and workpiece. The system makes it possible to enhance the visibility of occluded tools as well as to visualize real-time data from the process in 3D space. The graphics are geometrically registered with the workspace and provide an intuitive representation of the process, amplifying the user's understanding and simplifying machine operation.
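Geometric registration here amounts to transforming points reported in the machine's coordinate frame into the display's frame before drawing. The Python sketch below applies a 4x4 homogeneous calibration matrix; the matrix is assumed to come from a one-time calibration of the holographic element, and the function name is ours, not ASTOR's.

```python
import numpy as np

def register_point(T_display_from_machine, p_machine_mm):
    """Map a point from the machine frame (e.g. the tool tip reported
    by the CNC controller) into the display frame so overlaid graphics
    stay aligned with the physical workpiece. T_display_from_machine
    is a 4x4 homogeneous calibration matrix (an assumed calibration
    artifact, not part of the published system description)."""
    p = np.append(np.asarray(p_machine_mm, dtype=float), 1.0)
    q = T_display_from_machine @ p
    return q[:3] / q[3]        # back to 3-D display coordinates
```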