Abstract. We describe a system that provides combined auditory and haptic sensations arising while walking on different grounds. The simulation is based on a physical model that drives both haptic transducers embedded in sandals and headphones. The model can represent walking interactions with solid surfaces that creak or are covered with crumpling material. The simulation responds to pressure on the floor with a vibrotactile signal felt by the feet. In a preliminary discrimination experiment, 15 participants were asked to recognize four different surfaces from a list of sixteen possibilities under three conditions: haptics only, audition only, and combined haptics-audition. The results indicate that subjects are able to recognize most of the stimuli in the audition-only condition, and some material properties, such as hardness, in the haptics-only condition. The combination of auditory and haptic cues does not significantly improve recognition.
We describe a system that simulates in real time the auditory and haptic sensations of walking on different surfaces. The system is based on a pair of sandals enhanced with pressure sensors and actuators. The pressure sensors detect the interaction force during walking and control several physically based synthesis algorithms, which drive both the auditory and haptic feedback. The different hardware and software components of the system are described, together with possible uses and improvements for future design iterations.
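The control flow described above, where a pressure reading drives the synthesis that feeds both output channels, can be sketched in a few lines of Python. This is an illustrative sketch only; all names, thresholds, and scaling constants are assumptions, not part of the original system.

```python
# Hypothetical sketch of the sensor-to-synthesis chain: a raw pressure
# reading from a sandal sensor is mapped to an interaction force, which
# in turn gates and scales the audio/vibrotactile synthesis gain.
# Constants are illustrative placeholders.

def ground_reaction_force(raw_pressure, sensitivity=0.01):
    """Map a raw sensor reading to an interaction force estimate."""
    return raw_pressure * sensitivity

def synthesis_gain(force, threshold=0.1):
    """Produce feedback only once the foot actually presses the ground."""
    return max(0.0, force - threshold)

# Example: a heel strike ramping up and releasing
for raw in [0, 50, 200, 120, 0]:
    gain = synthesis_gain(ground_reaction_force(raw))
    print(f"raw={raw:3d}  gain={gain:.2f}")
```

In the real system this gain would parameterize physically based synthesis models rather than a simple scalar output, but the gating-and-scaling structure is the same idea.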
We describe an audio-haptic experiment conducted with a system that simulates in real time the auditory and haptic sensations of walking on different surfaces. The system is based on physical models, which drive both the haptic and audio synthesizers, and on a pair of shoes enhanced with sensors and actuators. The experiment examined the ability of subjects to recognize the different surfaces under both coherent and incoherent audio-haptic stimuli. Results show that in this kind of task the auditory modality dominates the haptic one.
No abstract
The architecture of a sound card can, in simple terms, be described as an electronic board containing digital bus interface hardware and analog-to-digital (A/D) and digital-to-analog (D/A) converters; a soundcard driver in the personal computer's (PC) operating system (OS) then controls the operation of the A/D and D/A converters on board the soundcard through a particular bus interface of the PC, acting as an intermediary for high-level audio software running in the PC's OS. This project provides open-source software for a do-it-yourself (DIY) prototype board based on a Field-Programmable Gate Array (FPGA), which interfaces to a PC through the USB bus and demonstrates full-duplex, mono 8-bit/44.1 kHz soundcard operation. Thus, the inclusion of FPGA technology in this paper, along with previous work on discrete-part- and microcontroller-based designs, completes an overview of architectures currently available for DIY implementations of soundcards, serving as a broad introductory tutorial to practical digital audio.
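To make the 8-bit/44.1 kHz mono format mentioned above concrete, the following Python sketch generates one second of unsigned 8-bit PCM samples for a sine tone. It is an illustration of the sample format only, not code from the described project; the tone frequency is an arbitrary choice.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as in the described soundcard
# Unsigned 8-bit PCM: silence sits at mid-scale (128), full range is 0..255.

def sine_samples(freq_hz=440.0, duration_s=1.0):
    """Generate mono unsigned 8-bit PCM samples for a sine tone."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        # Map [-1.0, 1.0] onto the unsigned 8-bit range [0, 255]
        samples.append(int(round(127.5 + 127.5 * x)))
    return bytes(samples)

pcm = sine_samples()
print(len(pcm))  # one second of audio = 44100 samples
```

At this format, one second of full-duplex operation moves 44,100 bytes in each direction, which is comfortably within USB full-speed bandwidth.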