The ever-increasing demand for artificial intelligence (AI) systems is underlining a significant requirement for new, AI-optimised hardware. Neuromorphic (brain-like) processors are one highly promising solution, with photonic-enabled realizations receiving increasing attention. Among these, approaches based upon vertical cavity surface emitting lasers (VCSELs) are attracting interest given their favourable attributes and mature technology. Here, we demonstrate a hardware-friendly neuromorphic photonic spike processor, using a single VCSEL, for all-optical image edge-feature detection. This exploits the ability of a VCSEL-based photonic neuron to integrate temporally encoded pixel data at high speed and to fire fast (100 ps-long) optical spikes upon detecting desired image features. Furthermore, the photonic system is combined with a software-implemented spiking neural network, yielding a full platform for complex image classification tasks. This work therefore highlights the potential of VCSEL-based platforms for novel, ultrafast, all-optical neuromorphic processors that interface with current computation and communication systems for use in future light-enabled AI and computer vision functionalities.
Taking inspiration from the structure and behaviour of the human visual system, and using the transposed convolution and saliency mapping methods of convolutional neural networks (CNNs), a spiking event-based image segmentation algorithm, SpikeSEG, is proposed. The approach makes use of both spike-based imaging and spike-based processing: the input images are either standard images converted to spiking images or are generated directly from a neuromorphic event-driven sensor, and are then processed by a spiking fully convolutional neural network. The spiking segmentation method uses the spike activations through time within the network to trace any output in the saliency maps back to its exact pixel location. This not only gives exact pixel locations for spiking segmentation, but does so with low latency and computational overhead. SpikeSEG is the first spiking event-based segmentation network and, over three experimental tests, achieves promising results with 96% accuracy overall and a 74% mean intersection over union for the segmentation, all within an event-by-event framework.
This paper proposes a low-budget solution to detect, and possibly track, space debris and satellites in Low Earth Orbit. The concept consists of a space-borne radar installed on a CubeSat flying at low altitude and detecting the occultations of radio signals coming from existing satellites flying at higher altitudes. The paper investigates the feasibility and performance of such a passive bistatic radar system. The key performance metrics considered are the minimum size of detectable objects (given visibility and frequency constraints on existing radio sources), the receiver size, and the compatibility with current CubeSat technology. Different illuminator types and receiver altitudes are considered under the assumption that all illuminators and receivers are on circular orbits.
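The detectability analysis sketched in this abstract rests on two textbook relations rather than anything specific to the paper: the forward-scatter radar cross-section of an opaque object (via Babinet's principle) and the standard bistatic radar equation against thermal noise. A minimal illustrative sketch, with all parameter values assumed for demonstration and not taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def forward_scatter_rcs(diameter_m: float, wavelength_m: float) -> float:
    """Forward-scatter RCS of an opaque object (Babinet's principle):
    sigma_fs = 4*pi*A^2 / lambda^2, where A is the silhouette area.
    Valid when the object is large compared to the wavelength."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return 4.0 * math.pi * area ** 2 / wavelength_m ** 2

def bistatic_snr(p_tx_w: float, g_tx: float, g_rx: float,
                 wavelength_m: float, rcs_m2: float,
                 r_tx_m: float, r_rx_m: float,
                 t_sys_k: float = 290.0, bandwidth_hz: float = 1e6) -> float:
    """Bistatic radar equation: received power over thermal noise kTB.
    r_tx_m is illuminator-to-object range, r_rx_m object-to-receiver range."""
    p_rx = (p_tx_w * g_tx * g_rx * wavelength_m ** 2 * rcs_m2
            / ((4.0 * math.pi) ** 3 * r_tx_m ** 2 * r_rx_m ** 2))
    return p_rx / (K_B * t_sys_k * bandwidth_hz)

# Illustrative use: 10 cm debris seen in forward scatter at L-band.
sigma = forward_scatter_rcs(diameter_m=0.1, wavelength_m=0.19)
snr = bistatic_snr(p_tx_w=100.0, g_tx=10.0, g_rx=10.0,
                   wavelength_m=0.19, rcs_m2=sigma,
                   r_tx_m=1.0e6, r_rx_m=5.0e5)
```

Inverting the radar equation for a required minimum SNR yields the minimum detectable RCS, and hence object size, for a given illuminator-receiver geometry, which is the kind of trade the paper explores across illuminator types and receiver altitudes.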
The Dynamic Vision Sensor (DVS) has many attributes, such as sub-millisecond response time and good low-light dynamic range, that make it well suited to the task of UAV detection. This paper proposes a system that exploits the features of an event camera solely for UAV detection, combining it with a Spiking Neural Network (SNN) trained using the unsupervised approach of Spike Time-Dependent Plasticity (STDP), to create an asynchronous, low-power system with low computational overhead. Utilising the unique features of both the sensor and the network, this results in a system that is robust to a wide variety of lighting conditions, has high temporal resolution, and propagates only the minimal amount of information through the network, while training on the equivalent of 43,000 images. The network returns a 92% detection rate when shown other objects and can detect a UAV with less than 1% of the pixels on the sensor being used for processing.
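The STDP rule mentioned here is, in its common pair-based form, a weight update driven by the relative timing of pre- and postsynaptic spikes: causal pairs (pre before post) potentiate the synapse, anti-causal pairs depress it. A minimal sketch of this generic rule, with all time constants and learning rates chosen for illustration (none are from the paper):

```python
import math

def stdp_update(w: float, t_pre: float, t_post: float,
                a_plus: float = 0.05, a_minus: float = 0.055,
                tau_plus: float = 20.0, tau_minus: float = 20.0,
                w_min: float = 0.0, w_max: float = 1.0) -> float:
    """Pair-based STDP weight update for one pre/post spike pair.

    dt = t_post - t_pre (ms). Pre-before-post (dt >= 0) potentiates with
    an exponentially decaying window; post-before-pre depresses. The
    weight is clipped to [w_min, w_max]."""
    dt = t_post - t_pre
    if dt >= 0:
        # Causal pairing: presynaptic spike contributed to the output spike.
        w += a_plus * math.exp(-dt / tau_plus)
    else:
        # Anti-causal pairing: weaken the connection.
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)
```

Because each update depends only on local spike times, the rule needs no labels or global error signal, which is what makes the training in this paper unsupervised and cheap to run event by event.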
Spiking neural networks (SNNs) are largely inspired by biology and neuroscience, and leverage ideas and theories from those fields to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. Integrate-and-fire (I&F) models are often adopted because they are considered more suitable, with the simple Leaky I&F (LIF) being the most used; the reasons for adopting such models are their efficiency or their biological plausibility. Nevertheless, a rigorous justification for adopting the LIF over other neuron models in artificial learning systems has not yet been provided. This work surveys a variety of neuron models in the literature and selects computational neuron models that are single-variable, efficient, and display different types of complexity. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. Neuron models are tested within an SNN trained with Spike-Timing Dependent Plasticity (STDP) on a classification task on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparably more hyper-parameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that accurately selecting the model based on the richness of the feature spectrum of the data could improve the performance of the whole system. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
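The three models compared here differ only in the term driving the membrane potential toward threshold: the LIF has a linear leak, the QIF a quadratic term, and the EIF an exponential spike-initiation term. A minimal sketch using forward-Euler integration, with all parameter values illustrative rather than taken from the paper (the paper's actual implementation is in SpykeTorch):

```python
import math

def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0):
    """Leaky I&F: linear relaxation toward rest plus input current."""
    return v + dt / tau * ((v_rest - v) + i_in)

def qif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_crit=10.0, a=0.1):
    """Quadratic I&F: quadratic term produces a spike upswing past v_crit."""
    return v + dt / tau * (a * (v - v_rest) * (v - v_crit) + i_in)

def eif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_t=10.0, delta_t=2.0):
    """Exponential I&F: exponential term sharpens spike initiation near v_t."""
    return v + dt / tau * ((v_rest - v)
                           + delta_t * math.exp((v - v_t) / delta_t) + i_in)

def run(step, t_steps=100, i_in=20.0, v_th=15.0, v_reset=0.0):
    """Drive one neuron with constant input; record threshold crossings."""
    v, spike_times = v_reset, []
    for t in range(t_steps):
        v = step(v, i_in)
        if v >= v_th:
            spike_times.append(t)
            v = v_reset  # fire-and-reset
    return spike_times
```

All three remain single-variable and cheap to update per event, which is why the comparison in the paper can isolate the effect of the nonlinearity itself rather than model dimensionality.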