“…In Ref. [89], the authors showed the feasibility of hand gesture recognition using electromagnetic waves and machine learning. The authors in Ref.…”
Audio-visual speech recognition (AVSR) is one of the most promising solutions for reliable speech recognition, particularly when audio is corrupted by noise. Additional visual information can be used for both automatic lip-reading and gesture recognition. Hand gestures are a form of non-verbal communication and an important component of modern human–computer interaction systems. Currently, audio and video modalities are easily accessible through the sensors of mobile devices. However, there is no out-of-the-box solution for automatic audio-visual speech and gesture recognition. This study introduces two deep neural network-based model architectures: one for AVSR and one for gesture recognition. The main novelty regarding audio-visual speech recognition lies in fine-tuning strategies for both visual and acoustic features and in the proposed end-to-end model, which considers three modality fusion approaches: prediction-level, feature-level, and model-level. The main novelty in gesture recognition lies in a unique set of spatio-temporal features, including those that take lip articulation information into account. As there are no available datasets for the combined task, we evaluated our methods on two different large-scale corpora—LRW and AUTSL—and outperformed existing methods on both the audio-visual speech recognition and gesture recognition tasks. We achieved an AVSR accuracy of 98.76% on the LRW dataset and a gesture recognition rate of 98.56% on the AUTSL dataset. The results obtained demonstrate not only the high performance of the proposed methodology, but also the fundamental possibility of recognizing audio-visual speech and gestures with the sensors of mobile devices.
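The abstract names three fusion strategies but does not show code. As a minimal sketch only, the PyTorch snippet below illustrates what feature-level fusion (concatenating audio and visual embeddings before a shared classifier) could look like; the class name, the GRU encoders, and all dimensions are hypothetical placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureLevelFusionAVSR(nn.Module):
    """Toy feature-level fusion: audio and visual embeddings are concatenated
    before a shared classifier. Prediction-level fusion would instead combine
    per-modality logits; model-level fusion would exchange intermediate states."""

    def __init__(self, audio_dim=256, visual_dim=256, num_classes=500):
        super().__init__()
        # Placeholder encoders; the actual work fine-tunes pretrained
        # acoustic and visual (lip-region) feature extractors.
        self.audio_encoder = nn.GRU(input_size=40, hidden_size=audio_dim, batch_first=True)
        self.visual_encoder = nn.GRU(input_size=512, hidden_size=visual_dim, batch_first=True)
        self.classifier = nn.Linear(audio_dim + visual_dim, num_classes)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T_a, 40), e.g. log-mel frames
        # visual_feats: (B, T_v, 512), e.g. per-frame lip-ROI embeddings
        _, h_a = self.audio_encoder(audio_feats)
        _, h_v = self.visual_encoder(visual_feats)
        fused = torch.cat([h_a[-1], h_v[-1]], dim=-1)  # feature-level fusion
        return self.classifier(fused)

# Example: batch of 2 clips, 100 audio frames, 29 video frames (typical LRW clip length)
logits = FeatureLevelFusionAVSR()(torch.randn(2, 100, 40), torch.randn(2, 29, 512))
```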
“…The radar was placed on a table (Figure 2), which corresponds to the coffee table (3a), decorative object (3b), and the couch armrest (3c) scenarios illustrated in Figure 1. Other locations may need adaptations of our simple gesture recognition pipeline, including special preprocessing of the raw signal and recognition techniques [37]. Figure 3 presents θ–R images obtained from the Walabot radar when placed in various locations corresponding to the scenarios from Figure 1 and various types of occlusion.…”
Section: Preliminary Prototype Findings and Future Work
We address gesture input for TV control, for which we examine mid-air free-hand interactions that can be detected via radar sensing. We adopt a scenario-based design approach to explore possible locations in the living room where radar sensors could be integrated, e.g., in the TV set, the couch armrest, or the user's smartphone, and we contribute a four-level taxonomy of locations relative to the TV set, the user, personal robot assistants, and the living room environment, respectively. We also present preliminary results about an interactive system using a 15-antenna ultra-wideband 3D radar, for which we implemented a dictionary of six directional swipe gestures for the control of dichotomous TV system functions.
CCS CONCEPTS: • Human-centered computing → Gestural input; Interface design prototyping; Scenario-based design.
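Neither the cited snippet nor the abstract spells out the gesture recognition pipeline. Purely as an illustration of the idea, the sketch below pairs six hypothetical directional swipes with dichotomous TV functions and classifies a swipe from the net motion of the strongest reflection in a sequence of θ–R radar images; the action names, thresholds, and processing are assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical pairing of six directional swipes with dichotomous TV functions;
# the actual gesture dictionary and action set are not specified in the abstract.
SWIPE_TO_ACTION = {
    "left": "channel_down", "right": "channel_up",
    "down": "volume_down",  "up": "volume_up",
    "push": "mute_on",      "pull": "mute_off",
}

def classify_swipe(frames):
    """frames: (T, n_theta, n_range) sequence of θ–R magnitude images.
    Track the strongest reflection in each frame and compare its net motion
    along the angle and range axes. A full six-gesture classifier would also
    use the elevation axis of the 3D radar to separate up/down swipes."""
    peaks = np.array([np.unravel_index(np.argmax(f), f.shape) for f in frames])
    d_theta = peaks[-1, 0] - peaks[0, 0]   # net motion across angle bins (left/right)
    d_range = peaks[-1, 1] - peaks[0, 1]   # net motion across range bins (push/pull)
    if abs(d_range) >= abs(d_theta):
        return "pull" if d_range > 0 else "push"
    return "right" if d_theta > 0 else "left"

# Example: synthetic frames with a target drifting toward larger range bins -> "pull"
frames = np.zeros((10, 32, 64))
for t in range(10):
    frames[t, 16, 10 + 3 * t] = 1.0
print(SWIPE_TO_ACTION[classify_swipe(frames)])  # mute_off
```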