Symposium on Spatial User Interaction 2020
DOI: 10.1145/3385959.3418459

Automatic Generation of Spatial Tactile Effects by Analyzing Cross-modality Features of a Video

Abstract: Tactile effects can enhance user experience of multimedia content. However, generating appropriate tactile stimuli without any human intervention remains a challenge. While visual or audio information has been used to automatically generate tactile effects, utilizing cross-modal information may further improve the spatiotemporal synchronization and user experience of the tactile effects. In this paper, we present a pipeline for automatic generation of vibrotactile effects through the extraction of both the vis…
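As a rough illustration of the kind of audio-driven stage such a pipeline might include (this is a generic sketch, not the paper's actual method), one common approach maps a per-frame loudness envelope of the soundtrack to vibrotactile intensity; the function name and parameters below are assumptions for illustration only:

```python
import numpy as np

def audio_to_vibration_envelope(audio, sample_rate=44100, frame_ms=20):
    """Map an audio waveform to a coarse vibrotactile intensity envelope.

    Hypothetical illustration: per-frame RMS energy, normalized to [0, 1],
    could drive an actuator's vibration amplitude in sync with the audio.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    # Split the signal into non-overlapping frames and take RMS per frame
    frames = np.reshape(audio[:n_frames * frame_len], (n_frames, frame_len))
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    peak = rms.max()
    return rms / peak if peak > 0 else rms

# Example: a 1-second 440 Hz tone whose loudness ramps up linearly,
# yielding a rising vibration envelope
t = np.linspace(0, 1, 44100, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * t
env = audio_to_vibration_envelope(audio)
```

A full cross-modal pipeline as described in the abstract would combine such audio features with visual features (e.g., motion of on-screen objects) to decide both timing and spatial placement of the vibrotactile cues.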


Years of citing publications: 2021, 2024

Cited by 6 publications (2 citation statements)
References 28 publications
“…These vehicle behaviors probably affect pedestrians' anticipation of the vehicle's intentions. Other scholars [20, 27] suggested that the physical modality can also be a kind of haptic feedback, such as a mobile phone vibration [28] or a haptic actuator. However, our goal is to design a vehicle-based eHMI, meaning these interfaces place cues on the vehicle only.…”
Section: Physical Modality (mentioning; confidence: 99%)
“…The visualization community has demonstrated ample applications that support multimodal data exploration with touch and speech (e.g., [35, 74-76]). In a similar vein, we plan to build on work in tactile displays [85] to surface the visual content in the video. While consuming video with tactile displays may be challenging, editing video may benefit from providing creators access to slow frame-by-frame content (e.g., to assess when a person moves out of the frame) and waveform visualizations.…”
Section: Discussion and Future Work (mentioning; confidence: 99%)