2020
DOI: 10.1142/s0129065720500458

Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses

Abstract: Visual neuroprostheses, which provide electrical stimulation at several sites of the human visual system, constitute a potential tool for vision restoration for the blind. Scientific and technological progress in the fields of neural engineering and artificial vision comes with new theories and tools that, along with the dawn of modern artificial intelligence, constitute a promising framework for the further development of neurotechnology. In the framework of the development of a Cortical Visual Neuroprosthe…

Cited by 41 publications (21 citation statements)
References 58 publications
“…The visual encoder system has been described elsewhere (12, 13, 55–59). In brief, it consisted of a video camera attached to an eyeglass frame for image acquisition using head scanning, and custom hardware/software that performs a real-time analysis of the light patterns received by the light sensors in the camera and a multichannel spatio-temporal filtering of the visual information to extract and enhance the most relevant features of the scene.…”
Section: Bio-inspired Retinal Encoder
confidence: 99%
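The statement above describes the bio-inspired encoder only at a high level. As an illustration of what a multichannel spatio-temporal filtering stage of this kind could look like, the sketch below applies a few difference-of-Gaussians spatial channels and smooths each one over time with a leaky integrator. This is a minimal sketch, not the cited implementation: the channel parameters, filter shapes, and the names `dog_channel` and `SpatioTemporalEncoder` are illustrative assumptions.

```python
# Illustrative sketch only: NOT the encoder described in the cited work.
# All parameter values and names here are assumptions for demonstration.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_channel(frame, sigma_center, sigma_surround):
    """Difference-of-Gaussians spatial filter: a crude center-surround channel."""
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    return center - surround

class SpatioTemporalEncoder:
    """Several DoG spatial channels, each followed by a leaky temporal integrator."""

    def __init__(self, channel_params=((1.0, 3.0), (2.0, 6.0)), tau=0.8):
        self.channel_params = channel_params  # (sigma_center, sigma_surround) per channel (assumed)
        self.tau = tau                        # temporal decay of the leaky integrator (assumed)
        self.state = None                     # per-channel state from the previous frame

    def process(self, frame):
        """frame: 2-D grayscale uint8 array. Returns an array of shape (n_channels, H, W)."""
        frame = frame.astype(np.float32) / 255.0
        spatial = [dog_channel(frame, c, s) for c, s in self.channel_params]
        if self.state is None:
            self.state = spatial
        else:
            # Leaky temporal low-pass: blend the previous state with the new spatial output.
            self.state = [self.tau * prev + (1.0 - self.tau) * cur
                          for prev, cur in zip(self.state, spatial)]
        return np.stack(self.state)
```

Feeding successive grayscale camera frames to `SpatioTemporalEncoder.process` yields a small stack of filtered channels that a downstream stage could, in principle, map onto stimulation patterns.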
“…Recently, machine-learning algorithms, in particular deep neural networks (Díaz-Vico et al, 2020; Lara-Benítez et al, 2020; Lozano et al, 2020; Yang et al, 2019), have been used for damage detection of structures (Ni et al, 2020; Wu et al, 2019). Abdeljaber et al (2018)…”
Section: Vibration-based Structural Health Monitoring
confidence: 99%
“…The translation of complex visual input into a phosphene percept (which by definition is limited) requires an efficient reduction of information and selection of only the essential visual features for a given task. This can be achieved with traditional computer vision approaches, such as edge detection (Boyle, Maeder, & Boles, 2001; Dowling, Maeder, & Boles, 2004; Guo, Yang, & Gao, 2018), but deep neural network models have also gained increasing interest among prosthetic engineers (e.g., Sanchez-Garcia, Martinez-Cantin, & Guerrero, 2020; Han et al, 2021; Bollen et al, 2019; Bollen, van Wezel, van Gerven, & Güçlütürk, 2019; De Ruyter Van Steveninck, Güçlü, van Wezel, & Van Gerven, 2020; Lozano et al, 2020; Lozano et al, 2018). Various image processing approaches have been proposed for mobility in particular (Barnes et al, 2011; Dagnelie et al, 2007; Dowling, Boles, & Maeder, 2006; Dowling, Maeder, & Boles, 2004; Feng & McCarthy, 2013; McCarthy et al, 2015; McCarthy, Feng, & Barnes, 2013; Parikh, Itti, Humayun, & Weiland, 2013; Srivastava, Troyk, & Dagnelie, 2009; van Rheede, Kennard, & Hicks, 2010; Vergnieux, Mace, & Jouffrais, 2014; Vergnieux, Macé, & Jouffrais, 2017; Zapf, Boon, Lovell, & Suaning, 2016).…”
Section: Introduction
confidence: 99%
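As a concrete illustration of the edge-detection style of information reduction mentioned in the statement above, the sketch below computes a gradient-magnitude edge map and collapses it onto a coarse binary grid that loosely stands in for a phosphene layout. It is a minimal sketch under assumed settings, not any cited method: the grid size, threshold, and the helper names `edge_map` and `phosphene_grid` are hypothetical.

```python
# Illustrative sketch only: edge-based reduction onto a coarse "phosphene" grid.
# Grid size, threshold, and function names are assumptions, not a cited method.
import numpy as np
from scipy.ndimage import sobel

def edge_map(frame):
    """Gradient-magnitude edge map of a 2-D grayscale uint8 frame, scaled to [0, 1]."""
    frame = frame.astype(np.float32) / 255.0
    gx = sobel(frame, axis=1)
    gy = sobel(frame, axis=0)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def phosphene_grid(edges, grid=(32, 32), threshold=0.2):
    """Downsample the edge map onto a coarse grid of binary 'phosphene' activations."""
    h, w = edges.shape
    gh, gw = grid
    # Average edge strength within each grid cell, then threshold it.
    cells = edges[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    activation = cells.mean(axis=(1, 3))
    return (activation > threshold).astype(np.uint8)

# Usage: grid = phosphene_grid(edge_map(gray_frame)); each 1 marks a cell whose
# averaged edge strength exceeds the threshold, i.e. a candidate stimulation site.
```

The point of the sketch is the information-reduction step itself: a full-resolution frame is reduced to a small binary array, which is the kind of low-bandwidth representation a limited phosphene percept can convey.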