Inspired by biological vision mechanisms, event-based cameras have been developed to capture continuous object motion and to detect brightness changes independently and asynchronously, overcoming the limitations of traditional frame-based cameras. Complementarily, spiking neural networks (SNNs) offer asynchronous computation and exploit the inherent sparseness of spatio-temporal events. Notably, event-based pixel-wise optical flow estimation computes the positions and relationships of objects across adjacent frames; however, because event camera outputs are sparse and uneven, dense scene information is difficult to generate, and the local receptive fields of the neural network also lead to poor tracking of moving objects. To address these issues, an improved event-based self-attention optical flow estimation network (SA-FlowNet) is proposed; it independently uses criss-cross and temporal self-attention mechanisms to directly capture long-range dependencies and to efficiently extract temporal and spatial features from the event streams. In the former mechanism, a cross-domain attention scheme is introduced to dynamically fuse the temporal-spatial features. The proposed network adopts a spiking-analogue neural network architecture trained with an end-to-end learning method and gains significant computational energy benefits, especially for the SNN components. State-of-the-art error rates for optical flow prediction on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset are demonstrated in comparison with current SNN-based approaches.
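To make the criss-cross attention named above concrete, the sketch below is a minimal, hedged reconstruction of a criss-cross attention block in the spirit of CCNet, not SA-FlowNet's actual implementation: the channel reduction factor of 8, the learnable residual scale `gamma`, and the layer shapes are illustrative assumptions.

```python
# Minimal sketch of criss-cross attention: each pixel attends only to pixels in
# its own row and column, so long-range dependencies are captured at a cost
# linear in H + W rather than quadratic in H * W.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrissCrossAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # residual gate, starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Row affinities: each row is treated as a sequence of length w.
        q_r = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_r = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        e_row = torch.bmm(q_r, k_r.transpose(1, 2)).view(b, h, w, w)

        # Column affinities: each column is treated as a sequence of length h.
        q_c = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_c = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        e_col = torch.bmm(q_c, k_c.transpose(1, 2)).view(b, w, h, h)
        e_col = e_col.permute(0, 2, 1, 3)  # -> (b, h, w, h)

        # Joint softmax over the w + h criss-cross positions of every pixel
        # (the centre pixel is counted twice here; CCNet masks one copy out).
        attn = F.softmax(torch.cat([e_row, e_col], dim=-1), dim=-1)
        a_row, a_col = attn[..., :w], attn[..., w:]

        # Aggregate values along the row and the column with those weights.
        v_r = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        out_r = torch.bmm(a_row.reshape(b * h, w, w), v_r).view(b, h, w, c)
        v_c = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        out_c = torch.bmm(a_col.permute(0, 2, 1, 3).reshape(b * w, h, h), v_c)
        out_c = out_c.view(b, w, h, c).permute(0, 2, 1, 3)

        out = (out_r + out_c).permute(0, 3, 1, 2)  # back to (b, c, h, w)
        return self.gamma * out + x


# Shape check: a 64-channel feature map keeps its shape after attention.
feats = torch.randn(2, 64, 32, 48)
print(CrissCrossAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 48])
```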
Sign language recognition has been utilized in human–machine interaction, improving the lives of people with speech impairments and of those who rely on nonverbal instructions. Thanks to its higher temporal resolution, lower visual redundancy and lower energy consumption, an event camera with a new dynamic vision sensor (DVS) shows promise for sign language recognition in robot perception and intelligent control. Previous work has focused on simple event camera-based gesture datasets such as DVS128Gesture, whereas event camera gesture datasets inspired by sign language are still lacking, which poses a great impediment to the development of event camera-based sign language recognition; an effective method to extract spatio-temporal features from event data is also strongly desired. First, event-based sign language gesture datasets are proposed, drawn from two sources: traditional sign language videos converted to event streams (DVS_Sign_v2e) and recordings from a DAVIS346 camera (DVS_Sign). In these datasets, the data are divided into five categories, verbs, quantifiers, positions, things and people, adapting to actual scenarios in which robots provide instruction or assistance. Sign language classification is then demonstrated in spiking neural networks trained with a spatio-temporal back-propagation method, achieving a best recognition accuracy of 77%. This work paves the way for combining event camera-based sign language gesture recognition with robotic perception in future intelligent systems.
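The spatio-temporal back-propagation training mentioned above hinges on unrolling leaky integrate-and-fire (LIF) dynamics over time and replacing the non-differentiable spike with a surrogate gradient. The sketch below is a hedged illustration of that idea only; the threshold, decay constant, surrogate window and layer sizes are assumptions, not the settings used for the DVS_Sign experiments.

```python
# Minimal sketch of an STBP-style spiking layer: LIF membrane dynamics unrolled
# over the event-frame time axis, with a rectangular surrogate gradient so the
# network can be trained end to end with ordinary back-propagation.
import torch
import torch.nn as nn

THRESH, DECAY, SURR_WIDTH = 1.0, 0.2, 0.5  # assumed hyper-parameters


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem >= THRESH).float()

    @staticmethod
    def backward(ctx, grad_output):
        mem, = ctx.saved_tensors
        # Pass gradient only where the membrane potential is near the threshold.
        return grad_output * (torch.abs(mem - THRESH) < SURR_WIDTH).float()


class SpikingLinear(nn.Module):
    """A fully connected LIF layer unrolled over the time dimension."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (time, batch, in_features) event frames flattened per time step.
        mem = torch.zeros(x.shape[1], self.fc.out_features, device=x.device)
        spikes = []
        for t in range(x.shape[0]):
            mem = DECAY * mem + self.fc(x[t])   # leaky integration of input
            out = SpikeFn.apply(mem)            # fire where threshold is crossed
            mem = mem - out * THRESH            # soft reset by subtraction
            spikes.append(out)
        return torch.stack(spikes)              # (time, batch, out_features)


# Toy usage: rate-coded logits are the spike outputs averaged over time.
events = (torch.rand(20, 4, 128) < 0.1).float()     # 20 steps, batch 4, 128 inputs
logits = SpikingLinear(128, 5)(events).mean(dim=0)  # 5 classes, as in the datasets above
print(logits.shape)  # torch.Size([4, 5])
```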