In these experiments, two plates were grasped between the thumb and forefinger and squeezed together along a linear track. An electromechanical system presented a constant resistance force during the squeeze up to a predetermined location on the track, whereupon the force effectively went to infinity (simulating a wall) or to zero (simulating a cliff). The task of the subject was to discriminate between two alternative levels of the constant resistance force (a reference level and a reference-plus-increment level). Results of these experiments indicate a just noticeable difference of roughly 7% of the reference force using a one-interval paradigm with trial-by-trial feedback over the ranges 2.5 ≤ F0 ≤ 10.0 newtons, 5 ≤ D ≤ 30 mm, 45 ≤ S ≤ 125 mm, and 25 ≤ V ≤ 160 mm/sec, where F0 is the reference force, D is the distance squeezed, S is the initial fingerspan, and V is the mean velocity of the squeeze. These results, based on tests with 5 subjects, are consistent with a wide range of previous results, some of which are associated with other body surfaces and muscle systems and many of which were obtained with different psychophysical methods.

This is the second in a series of papers concerned with the manual perception of objects, and, more specifically, with the ability to distinguish between different objects manually (i.e., with manual resolution). In the first paper, we reported the results of a variety of experiments in which the subject was required to discriminate or identify object length by means of the finger-span method (Durlach et al., 1989). In these experiments, a rigid object was grasped between the terminal pads of the thumb and forefinger, and object length was estimated by sensing the differential position of these pads. In the present series of experiments, an object was again grasped between the terminal pads of the thumb and forefinger; in this case, however, the object was not rigid, and the task was to squeeze the object and estimate the resistance force.
The experimental apparatus was designed in such a way that the force was constant over the displacement resulting from the squeeze, the force was varied between squeezes, and the task was to discriminate between two alternative levels of the force. In the length-resolution task, the response is derived from estimates of finger position. In the current task, the response is derived from estimates of finger force. In both cases, perceptual cues are available from both the cutaneous sensory system and the kinesthetic/proprioceptive sensory system.

This work was supported by ONR Grants N00014-88-K-0338, N00014-89-J-3247, and N00014-90-J-1935, NIH Grant 2-R01-DC00126-11, Fairchild Foundation funds awarded to X. D. Pang, and a Chu Fellowship to H. Z. Tan. The support and contributions of John Hollerbach at all levels of this work are greatly appreciated. We would also like to thank Bill Rabinowitz for his help with the instrumentation and other aspects of the work. We appreciate the contributions of Younes Borki, Lorraine Delhorne, Mary Hou, and Mandayam Srinivasan.
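The 7% just noticeable difference comes from a one-interval paradigm: on each trial either the reference force or the reference plus an increment is presented, and the subject labels which it was. A standard way to summarize such data is the sensitivity index d′ from signal detection theory, with the JND often defined as the increment yielding d′ = 1. A minimal sketch (the trial proportions below are hypothetical, not data from the paper):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index for a one-interval, two-response experiment:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical session: with a 7% force increment, the subject responded
# "increment" on 76% of increment trials and 31% of reference trials.
sensitivity = d_prime(0.76, 0.31)   # roughly 1.2
# If d' grows roughly linearly with the increment, the increment giving
# d' = 1 (one common JND definition) would be about 0.07 / 1.2 of F0.
```

The trial-by-trial feedback mentioned in the abstract helps subjects hold a stable response criterion, which keeps the false-alarm rate from drifting across a session.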
In these experiments, two plates were grasped between the thumb and the index finger and squeezed together along a linear track. The force resisting the squeeze, produced by an electromechanical system under computer control, was programmed to be either constant (in the case of the force discrimination experiments) or linearly increasing (in the case of the compliance discrimination experiments) over the squeezing displacement. After completing a set of basic psychophysical experiments on compliance resolution (Experiment 1), we performed further experiments to investigate whether work and/or terminal-force cues played a role in compliance discrimination. In Experiment 2, compliance and force discrimination experiments were conducted with a roving-displacement paradigm to dissociate work cues (and terminal-force cues for the compliance experiments) from compliance and force cues, respectively. The effect of trial-by-trial feedback on response strategy was also investigated. In Experiment 3, compliance discrimination experiments were conducted with work cues totally eliminated and terminal-force cues greatly reduced. Our results suggest that people tend to use mechanical work and force cues for compliance discrimination. When work and terminal-force cues were dissociated from compliance cues, compliance resolution was poor (22%) relative to force and length resolution. When work cues were totally eliminated, performance could be predicted from terminal-force cues. A parsimonious description of all data from the compliance experiments is that subjects discriminated compliance on the basis of terminal force.

To a first approximation, the mechanical behavior of all deformable solid objects can be expressed as f = F_f + K·x + B·ẋ + M·ẍ, which represents the relationship between the total force (f) applied to the object and the corresponding displacement (x), velocity (ẋ), and acceleration (ẍ); the frictional force (F_f), linear stiffness (K), viscosity (B), and mass (M) are the physical parameters that distinguish one object from another (we use lowercase letters for variables and uppercase for parameters). It is our goal to study manual resolution of all these physical variables and parameters and to provide basic psychophysical information that can be used to (1) advance our understanding of manual perception of object properties, (2) guide the development of design specifications for haptic interfaces that not only sense position and force commands from the human operator but also display such information to the operator in teleoperation and virtual environment systems (see, e.g., the recently published book on systems of this type edited by Durlach & Mavor, 1994), and (3) improve the design of autonomous robots that must make use of manual sensing and manipulation. This is the third in a series of papers concerned with how individual physical properties of objects are perceived. In the first paper of this series (Durlach et al., 1989), we reported the results of a variety of experiments in which the subject was required to discriminate or identify object length by means of the finger-span method.
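The friction-stiffness-viscosity-mass relation is straightforward to evaluate numerically. The sketch below computes the force trace for a sampled displacement trajectory; the parameter values are illustrative assumptions, not values used in the experiments:

```python
import numpy as np

def contact_force(x, dt, F_f=0.5, K=200.0, B=2.0, M=0.05):
    """f = F_f + K*x + B*x_dot + M*x_ddot for a sampled displacement x(t).
    Illustrative parameters: friction F_f (N), stiffness K (N/m),
    viscosity B (N*s/m), mass M (kg)."""
    v = np.gradient(x, dt)   # finite-difference velocity
    a = np.gradient(v, dt)   # finite-difference acceleration
    return F_f + K * x + B * v + M * a

# A held squeeze (constant 10 mm displacement) reduces to f = F_f + K*x:
f = contact_force(np.full(100, 0.010), dt=0.001)
# every sample equals 0.5 + 200 * 0.010 = 2.5 N
```

Setting B and M to zero and making K the stimulus variable recovers the pure-compliance condition studied here, just as setting K, B, and M to zero recovers the constant-force condition of the earlier force-discrimination experiments.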
One challenge in multimodal interface research is the lack of robust subsystems that support multimodal interactions. By focusing on a chair, an object that is involved in virtually all human-computer interactions, the Sensing Chair project enables an ordinary office chair to become aware of its occupant's actions and needs. Surface-mounted pressure-distribution sensors are placed over the seat pan and backrest of the chair for real-time capture of contact information between the chair and its occupant. Given the similarity between a pressure distribution map and a grayscale image, pattern recognition techniques commonly used in computer and robot vision, such as principal components analysis, have been successfully applied to the problem of sitting-posture classification. The current static posture classification system operates in real time with an overall classification accuracy of 96% for familiar users (people it had felt before) and 79% for unfamiliar users. Future work is aimed at a dynamic posture tracking system that continuously tracks not only steady-state (static) but also transitional (dynamic) sitting postures. The results reported here form important stepping stones toward an intelligent chair that can find applications in many areas, including multimodal interfaces, intelligent environments, and the safety of automobile operation.
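Treating each pressure map as a grayscale image, the pipeline described above (principal components analysis for dimensionality reduction, followed by a classifier) can be sketched as below. The nearest-centroid rule, array shapes, and posture names are illustrative assumptions; the actual system's classifier may differ:

```python
import numpy as np

def fit_pca(maps, n_components=8):
    """maps: (n_samples, n_pixels) flattened pressure maps.
    Returns the mean map and the top principal axes (via SVD)."""
    mean = maps.mean(axis=0)
    _, _, vt = np.linalg.svd(maps - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(maps, mean, components):
    """Project maps onto the principal axes (feature extraction)."""
    return (maps - mean) @ components.T

def nearest_centroid(features, labels, query):
    """Assign the posture label whose mean feature vector is closest."""
    centroids = {lab: features[labels == lab].mean(axis=0)
                 for lab in set(labels.tolist())}
    return min(centroids, key=lambda lab: np.linalg.norm(query - centroids[lab]))
```

The familiar/unfamiliar accuracy gap (96% vs. 79%) follows naturally from this design: the principal axes and centroids are fit to the training users' pressure patterns, so a new occupant's maps project less cleanly onto them.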
Objective: This study examined the effectiveness of rear-end collision warnings presented in different sensory modalities while drivers were engaged in cell phone conversations in a driving simulator. Background: Tactile and auditory collision warnings have been shown to improve braking response time (RT) in rear-end collision situations. However, it is not clear how effective these warnings are when the driver is engaged in attentionally demanding secondary tasks, such as talking on a cell phone. Method: Sixteen participants in a driving simulator experienced three collision warning conditions (none, tactile, and auditory) in three conversation conditions (none, simple hands-free, and complex hands-free). Driver RT was captured from warning onset to brake initiation (WON2B). Results: WON2B times for auditory warnings were significantly longer during simple conversations than during no conversation (+148 ms), whereas there was no significant difference between these conditions for tactile warnings (+53 ms). For complex conversations, WON2B times for both tactile (+146 ms) and auditory warnings (+221 ms) were significantly longer than during no conversation. During complex conversations, tactile warnings produced significantly shorter WON2B times than no warning (-141 ms). Conclusion: Tactile warnings are more effective than auditory warnings during both simple and complex conversations. Application: These results indicate that tactile rear-end collision warnings have the potential to offset some of the driving impairment caused by cell phone conversations.
Various psychophysical methods have been used to study human haptic perception, although the selection of a particular method is often based on convention rather than an analysis of which technique is optimal for the question being addressed. In this review, classical psychophysical techniques used to measure sensory thresholds are described, as well as more modern methods such as adaptive procedures and those associated with signal detection theory. Details are provided as to how these techniques should be implemented to measure absolute and difference thresholds, and factors that influence subjects' responses are noted. In addition to the methods used to measure sensory thresholds, the techniques available for measuring the perception of suprathreshold stimuli are presented. These scaling methods are reviewed in the context of the various stimulus and response biases that influence how subjects respond to stimuli. The importance of understanding the factors that influence perceptual processing is highlighted throughout the review with reference to experimental studies of haptic perception.
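As one concrete example of the adaptive procedures such a review covers, a transformed up-down staircase (Levitt's two-down one-up rule) converges on the stimulus level yielding roughly 70.7% correct. A minimal sketch, with the reversal-counting and averaging details chosen for illustration:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """Transformed staircase (Levitt, 1971): two consecutive correct
    responses lower the stimulus level, one error raises it, so the
    track converges near the ~70.7%-correct point.
    `respond(level)` runs one trial and returns True if correct."""
    level, streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:                # two in a row: make it harder
                streak = 0
                if direction == +1:        # direction change = reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, step)
        else:                              # any error: make it easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Discard early reversals, average the rest as the threshold estimate
    return sum(reversals[2:]) / len(reversals[2:])
```

As a deterministic check, an observer who is correct exactly when the level is at least 5 drives the estimate to 4.5: midway between the last level always answered correctly and the first always missed.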
A large body of research now supports the claim that two different and dissociable processes are involved in making numerosity judgments regarding visual stimuli: subitising (fast and nearly errorless) for up to 4 stimuli, and counting (slow and error-prone) when more than 4 stimuli are presented. We studied tactile numerosity judgments for combinations of 1-7 vibrotactile stimuli presented simultaneously over the body surface. In Experiment 1, the stimuli were presented once, while in Experiment 2 conditions of single presentation and repeated presentation of the stimulus were compared. Neither experiment provided any evidence for a discontinuity in the slope of either the RT or error data, suggesting that subitisation does not occur for tactile stimuli. By systematically varying the intensity of the vibrotactile stimuli in Experiment 3, we were able to demonstrate that participants were not simply using the 'global intensity' of the whole tactile display to make their tactile numerosity judgments, but were, instead, using information concerning the number of tactors activated. The results of the three experiments reported here are discussed in relation to current theories of counting and subitising, and potential implications for the design of tactile user interfaces are highlighted.
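The test for a slope discontinuity can be made concrete: fit one line to RT versus numerosity, fit two lines split at each candidate breakpoint, and ask how much the split reduces the residual error. A hypothetical sketch of this idea (not the authors' analysis code):

```python
import numpy as np

def rss(x, y):
    """Residual sum of squares of a least-squares line through (x, y)."""
    if len(x) < 2:
        return 0.0
    coef = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coef, x) - y) ** 2))

def slope_break(n, rt):
    """Return (best breakpoint, RSS reduction of two lines vs. one).
    A near-zero reduction means no evidence of a slope discontinuity,
    i.e. no subitising range distinct from the counting range."""
    single = rss(n, rt)
    k, split = min(((k, rss(n[n <= k], rt[n <= k]) + rss(n[n > k], rt[n > k]))
                    for k in range(2, int(n.max()))), key=lambda t: t[1])
    return k, single - split
```

On a flat-then-steep RT profile (the visual subitising signature) the two-line fit wins decisively; on a single straight line, as the tactile data here showed, the split buys essentially nothing.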