Visual neuroprostheses, which provide electrical stimulation at several sites along the human visual system, constitute a potential tool for vision restoration in the blind. Scientific and technological progress in the fields of neural engineering and artificial vision has brought new theories and tools that, together with the rise of modern artificial intelligence, constitute a promising framework for the further development of neurotechnology. Within the development of a Cortical Visual Neuroprosthesis for the blind (CORTIVIS), we now face the challenge of developing computationally powerful tools and flexible approaches that will allow us to provide some degree of functional vision to individuals who are profoundly blind. In this work, we propose a general neuroprosthesis framework composed of several task-oriented and visual encoding modules. We address the development and implementation of computational models of the firing rates of retinal ganglion cells and design a tool, Neurolight, that allows these models to be interfaced with intracortical microelectrodes in order to create electrical stimulation patterns that can evoke useful percepts. In addition, the framework supports the deployment of a diverse array of state-of-the-art deep-learning techniques for task-oriented and general image pre-processing, such as semantic segmentation and object detection, within our system's pipeline. To the best of our knowledge, this constitutes the first deep-learning-based system designed to directly interface with the visual brain through an intracortical microelectrode array. We implement the complete pipeline: acquiring a video stream, developing and deploying task-oriented deep-learning models and predictive models of retinal ganglion cells' encoding of visual inputs, and controlling a neurostimulation device able to send electrical pulse trains to a microelectrode array implanted in the visual cortex.
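The end-to-end pipeline described above (video acquisition, deep-learning pre-processing, a retinal encoding model, and pulse-train generation for the electrode array) can be sketched as a chain of stages. This is a minimal illustrative sketch only: the function names, the 10x10 "electrode grid" shape, and the simple intensity-to-rate mapping are assumptions for demonstration, not the CORTIVIS/Neurolight API.

```python
import numpy as np

def acquire_frame(rng):
    # Stand-in for a camera frame grab (illustrative, not the real device API).
    return rng.random((64, 64, 3))

def preprocess(frame):
    # Placeholder for task-oriented deep-learning pre-processing
    # (e.g., semantic segmentation); here: grayscale + downsample to 10x10.
    gray = frame.mean(axis=2)
    return gray[::6, ::6][:10, :10]

def retinal_encoding(image):
    # Placeholder predictive model of ganglion-cell firing rates:
    # a hypothetical monotonic mapping from intensity to rate in Hz.
    return 100.0 * image

def rates_to_pulse_trains(rates, bin_ms=100.0):
    # Convert per-electrode firing rates into pulse counts for one time bin.
    return np.round(rates * bin_ms / 1000.0).astype(int)

rng = np.random.default_rng(0)
frame = acquire_frame(rng)
rates = retinal_encoding(preprocess(frame))
pulses = rates_to_pulse_trains(rates)
```

In the real system each placeholder would be replaced by the corresponding trained model or device driver; the point of the sketch is only the data flow from pixels to per-electrode stimulation commands.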
Plastic debris constitutes up to 87% of marine litter and represents one of the most frequently studied vectors for marine alien species with invasive potential over the last 15 years. This review presents an integrated analysis of the different factors involved in the impact of plastic as a vector for the dispersal of marine species. The sources of entry of plastic materials into the ocean are identified, as well as how these materials move between habitats, affecting each trophic level and producing hot spots of plastic accumulation in the ocean. The characterization of plastic as a dispersal vector for marine species has provided information about the inherent properties that drive its impact on the ocean: persistence, buoyancy, and variety of chemical composition, all of which facilitate colonization by macro- and microscopic species and dispersal across oceans and ecosystems. The study of differences in the biocolonization of plastic debris according to its chemical composition has provided fundamental information about the invasion process mediated by plastic and has highlighted gaps in our knowledge of this process. A wide range of species attached to plastic materials has been documented, and the phyla most frequently found on plastic have been identified, from potentially invasive macrofauna to toxic microorganisms capable of causing great damage in places far from their origin. Plastic appears to be more efficient than natural oceanic rafts at carrying taxa such as Arthropoda, Annelida, and Mollusca. Although the differential colonization of different plastic polymers is not yet clear, the chemical composition may determine the community of microorganisms, which can include pathogens as well as virulence and antibiotic-resistance genes.
The properties of plastic allow it to be widely dispersed in practically all ocean compartments, making this material an effective means of transport for many species that could become invasive.
Deep Learning offers flexible, powerful tools that have advanced our understanding of the neural coding of sensory systems. In this work, a 3D Convolutional Neural Network (3D CNN) is used to mimic the behavior of a population of mouse retinal ganglion cells in response to different light patterns. For this purpose, we projected homogeneous RGB flashes and checkerboard stimuli with variable luminances and wavelength spectra onto the mouse retina to mimic a more naturalistic stimulus environment. We also used white moving bars to localize the spatial positions of the recorded cells. The recorded spikes were then smoothed with a Gaussian kernel and used as the output target when training a 3D CNN in a supervised way. To find a suitable model, two hyperparameter search stages were performed. In the first stage, a trial-and-error process allowed us to obtain a system able to fit the neurons' firing rates. In the second stage, a systematic procedure was used to compare several gradient-based optimizers, loss functions, and numbers of convolutional layers. We found that a three-layer 3D CNN was able to predict the ganglion cells' firing rates with high correlations and low prediction error, as measured with Mean Squared Error and Dynamic Time Warping on test sets. These models were competitive with, or outperformed, other models already used in neuroscience, such as Feed-Forward Neural Networks and Linear-Nonlinear models. This methodology allowed us to capture temporal response dynamics in a robust way, even for neurons with high trial-to-trial variability in spontaneous firing, when providing the peristimulus time histogram as the model's output target.
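The construction of the training target described above, smoothing a binned spike train with a Gaussian kernel to obtain a continuous firing-rate trace, can be sketched in a few lines. The kernel width (`sigma_bins = 2.0`) and truncation radius are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def gaussian_kernel(sigma_bins, radius=None):
    # Discrete Gaussian kernel, truncated at ~3 sigma and normalized to sum to 1
    # so that smoothing preserves the total spike count.
    if radius is None:
        radius = int(3 * sigma_bins)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma_bins) ** 2)
    return k / k.sum()

def smooth_spikes(spike_counts, sigma_bins=2.0):
    # Convolve a binned spike train with the Gaussian to get a firing-rate
    # target suitable for supervised regression (e.g., by a 3D CNN).
    k = gaussian_kernel(sigma_bins)
    return np.convolve(spike_counts, k, mode="same")

# A single spike in the middle of a 100-bin window becomes a smooth bump.
spikes = np.zeros(100)
spikes[50] = 1.0
rate = smooth_spikes(spikes)
```

Because the kernel is normalized, the smoothed trace integrates to the original spike count, which keeps the regression target on a firing-rate scale.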
This paper proposes a hardware implementation to speed up the calculation of the feature descriptor vector in the Scale-Invariant Feature Transform (SIFT) algorithm. The proposed architecture, which improves on conventional solutions based on embedded processors or other hardware/software codesigns, computes a 27-element feature descriptor vector from a 15x15-pixel keypoint neighborhood. This process comprises several steps, including complex operations such as vector normalization. The paper compares two implementations: one time-optimized and the other memory-optimized. The two approaches require 649 and 874 clock cycles, respectively, for a single feature vector calculation (6.49 µs and 8.74 µs on a 100 MHz FPGA).
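The latency figures follow directly from the cycle counts and the clock frequency (cycles / f_clk). A quick check of that arithmetic, together with a plain software reference for the L2 normalization step mentioned above (this is the textbook floating-point operation, not the paper's fixed-point hardware datapath):

```python
import numpy as np

def cycles_to_us(cycles, clock_mhz=100.0):
    # Latency in microseconds: cycles / (clock in MHz) = cycles * period_in_us.
    return cycles / clock_mhz

def normalize_descriptor(v):
    # L2-normalize a feature descriptor vector; software reference only,
    # not the fixed-point normalization implemented on the FPGA.
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

t_fast = cycles_to_us(649)   # time-optimized variant
t_small = cycles_to_us(874)  # memory-optimized variant
d = normalize_descriptor(np.ones(27))
```

At 100 MHz each cycle is 10 ns, so 649 and 874 cycles give exactly the 6.49 µs and 8.74 µs quoted in the abstract.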
In the last decade, skin color has proven to be a useful cue for recognizing and tracking faces and hands, and skin color segmentation has become the first step in several processing tasks. To overcome the weaknesses that existing software solutions show in real-time mobile applications, we propose an FPGA-based implementation of a skin classifier. The skin classification algorithm and its hardware architecture are described herein. Results are presented in terms of classification performance, processing rate, and hardware resources used.
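As an illustration of the kind of per-pixel skin classification such a system performs, here is one widely used explicit RGB thresholding rule (Kovac/Peer-style). This is an assumption for demonstration, not necessarily the classifier implemented on the FPGA in this paper; its appeal for hardware is that it reduces to comparisons and subtractions, which map cheaply onto FPGA logic.

```python
def is_skin_rgb(r, g, b):
    # Explicit RGB skin rule with Kovac/Peer-style thresholds (illustrative
    # only, not the specific classifier described in the paper).
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)
```

In a hardware pipeline, a rule like this evaluates one pixel per clock cycle, which is what makes real-time segmentation feasible on modest FPGAs.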