The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work proposes a bio-inspired visual model that focuses on the interactions of cortical areas, in which a new mechanism of feedforward and feedback processing is introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. The model considers two main visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters that locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the Dynamic Vision Sensor (DVS). The responses of area V1 are weighted and pooled by area MT cells, which are selective to different velocities, i.e. direction and speed. Such feature selectivity is derived here from combining activities in the spatio-temporal domain and integrating them over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match the properties of the feature selectivity in both areas for feedback processing. For this linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domain to balance the activities of individual neurons in model areas V1 and MT. The model was tested using different stimuli moving in different directions. The results reveal that the error between the estimated motion and the synthetic ground truth is decreased in area MT compared with the initial estimation in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells.
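To give a concrete picture of the V1-to-MT pooling, feedback modulation, and normalization stages described above, the following Python fragment is a minimal, hypothetical sketch; the array shapes, the multiplicative feedback rule, the random weight matrices, and all parameter values are assumptions for illustration and do not reproduce the implementation used in this work.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize(r, eps=1e-2):
    """Divisive normalization over space and feature domains (assumed form)."""
    pool = uniform_filter(r.sum(axis=-1), size=5)       # local spatial pool of summed feature activity
    return r / (eps + pool[..., None])

def mt_from_v1(v1, weights):
    """Weight and pool direction-selective V1 responses (H, W, n_dir) into
    velocity-selective MT responses (H, W, n_vel) with larger receptive fields."""
    mt = np.tensordot(v1, weights, axes=([-1], [0]))     # weighted sum over V1 directions
    mt = uniform_filter(mt, size=(9, 9, 1))              # integrate over a larger space region
    return normalize(np.maximum(mt, 0.0))                # half-wave rectify, then normalize

def feedback_modulation(v1, mt, proj, gain=1.0):
    """Modulate V1 by MT feedback matched in preferred direction, using the
    assumed rule net = v1 * (1 + gain * feedback)."""
    fb = np.tensordot(mt, proj, axes=([-1], [0]))        # project MT velocities back onto V1 directions
    return normalize(v1 * (1.0 + gain * fb))

# Toy usage with random activities (8 directions, 24 velocity channels).
H, W, n_dir, n_vel = 32, 32, 8, 24
v1 = np.random.rand(H, W, n_dir)
w = np.random.rand(n_dir, n_vel)      # V1-direction -> MT-velocity weights (assumed)
proj = np.random.rand(n_vel, n_dir)   # MT-velocity -> V1-direction feedback projection (assumed)
mt = mt_from_v1(v1, w)
v1_modulated = feedback_modulation(v1, mt, proj)
```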
The Leap Motion Controller (LMC) is a gesture sensor consisting of three infrared light emitters and two infrared stereo cameras that serve as tracking sensors. The LMC translates hand movements into graphical data used in a variety of applications such as virtual/augmented reality and object movement control. In this work, we intend to control the movements of a prosthetic hand via the LMC, such that the fingers are flexed or extended in response to hand movements. This is carried out by passing the data from the Leap Motion to a processing unit, where the raw data are processed by an open-source package (Processing i3) in order to control five servo motors through a micro-controller board. In addition, a haptic setup is proposed using force-sensitive resistors (FSRs) and vibro-motors, in which the speed of these motors is proportional to the grasp force exerted by the prosthetic hand. An investigation into the optimal placement of the FSRs on the prosthetic hand to obtain suitable haptic feedback was carried out. The results show the effect of object shape and weight on the obtained FSR response and how these factors influence the placement of the sensors.
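As a rough, hypothetical sketch of the described control chain, the Python code below maps per-finger flexion values (assumed to have already been extracted from LMC tracking frames) to five servo angles and scales an FSR reading into a vibro-motor speed, sending both over a serial link to the micro-controller. The serial port name, message format, and scaling constants are assumptions and do not reproduce the Processing i3 implementation used in this work.

```python
import serial  # pyserial

SERVO_MIN, SERVO_MAX = 0, 180   # servo angle range in degrees (assumed)
VIB_MAX = 255                   # 8-bit PWM duty cycle for the vibro-motor (assumed)

def flexion_to_servo_angles(flexions):
    """Map per-finger flexion values in [0, 1] (extended .. fully flexed),
    extracted from LMC tracking data, to servo angles for five fingers."""
    return [int(SERVO_MIN + f * (SERVO_MAX - SERVO_MIN)) for f in flexions]

def fsr_to_vibration(adc_value, adc_max=1023):
    """Scale an FSR reading (10-bit ADC assumed) so that vibro-motor speed
    grows proportionally with the measured grasp force."""
    return int(VIB_MAX * min(max(adc_value, 0), adc_max) / adc_max)

def send_command(port, angles, vibration):
    """Send a simple comma-separated command line to the micro-controller
    (this message format is an assumption, not the paper's protocol)."""
    msg = ",".join(str(a) for a in angles) + f",{vibration}\n"
    port.write(msg.encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:  # port name assumed
        flexions = [0.1, 0.8, 0.8, 0.7, 0.6]   # thumb .. pinky, e.g. a grasp gesture
        angles = flexion_to_servo_angles(flexions)
        vibration = fsr_to_vibration(adc_value=512)
        send_command(port, angles, vibration)
```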
In this paper, we propose a new bio-inspired approach for motion estimation using a Dynamic Vision Sensor (DVS) (Lichtsteiner et al., 2008), in which an event-based temporal-window accumulation is introduced. This format accumulates the activity of the pixels over a short time, i.e. several µs. The optic flow is estimated by a new neural mechanism that is inspired by the motion pathway of the visual system and is consistent with the functionality of the vision sensor, and for which new temporal filters are proposed. Since the DVS already generates temporal derivatives of the input signal, we suggest a smoothing temporal filter instead of the biphasic temporal filters introduced by Adelson and Bergen (1985). Our model extracts motion information via a spatiotemporal energy mechanism that is oriented in the space-time domain and tuned in spatial frequency. To balance the activities of individual cells against the activities of their neighborhoods, a normalization process is carried out. We tested our model using different kinds of stimuli that underwent translatory and rotatory motions. The results highlight an accurate flow estimation compared with synthetic ground truth. To show the robustness of our model, we further probed it with synthetically generated ground-truth stimuli and realistic complex motions, e.g. biological motion and a bouncing ball, with satisfactory results.
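To make the event-accumulation and filtering stages more concrete, the following Python sketch accumulates DVS events over a short temporal window and computes an opponent spatiotemporal energy from even/odd spatial Gabors combined with fast/slow smoothing (low-pass) temporal filters in place of biphasic ones. Filter sizes, time constants, the exponential form of the smoothing filters, and the restriction to one spatial orientation are assumptions made for brevity, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def accumulate_events(events, shape, t0, dt):
    """Accumulate DVS events (x, y, t, polarity) that fall into the short
    window [t0, t0 + dt) into one 2D frame (event-based accumulation)."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t, p in events:
        if t0 <= t < t0 + dt:
            frame[y, x] += 1.0 if p > 0 else -1.0
    return frame

def gabor_pair(size=11, wavelength=6.0, sigma=3.0):
    """Even/odd spatial Gabor pair tuned in spatial frequency (1D, along x)."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * x / wavelength), env * np.sin(2 * np.pi * x / wavelength)

def lowpass(frames, tau):
    """Causal exponential smoothing over the stack of accumulated frames; a
    smoothing filter is used instead of a biphasic one because the DVS output
    already approximates a temporal derivative of the luminance signal."""
    w = np.exp(-np.arange(len(frames)) / tau)
    w /= w.sum()
    return np.tensordot(w, np.asarray(frames)[::-1], axes=(0, 0))

def motion_energy_1d(frames):
    """Opponent spatiotemporal energy along x: fast/slow smoothing filters
    approximate a temporal quadrature pair and are combined with the spatial
    Gabor pair to form space-time oriented responses."""
    even, odd = gabor_pair()
    fast, slow = lowpass(frames, tau=1.0), lowpass(frames, tau=4.0)
    ef = convolve(fast, even[None, :]); of = convolve(fast, odd[None, :])
    es = convolve(slow, even[None, :]); os_ = convolve(slow, odd[None, :])
    e_dir1 = (ef + os_) ** 2 + (of - es) ** 2
    e_dir2 = (ef - os_) ** 2 + (of + es) ** 2
    # Opponent energy; the sign indicates one of the two horizontal motion
    # directions (the labeling depends on the filter phase conventions).
    return e_dir1 - e_dir2
```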