2022
DOI: 10.1167/jov.22.2.20

End-to-end optimization of prosthetic vision

Abstract: Neural prosthetics may provide a promising solution to restore visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared to normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces a diffi…
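The pipeline the abstract alludes to lends itself to a small illustration. Below is a minimal, hypothetical PyTorch sketch of end-to-end optimization, assuming a typical encoder–simulator–decoder setup: an encoder maps a camera frame to electrode stimulation intensities, a fixed differentiable phosphene simulator renders the percept, and a reconstruction loss is backpropagated through the whole chain. All names, sizes, and the Gaussian phosphene model are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ELECTRODES = 256
IMG = 64  # input/percept resolution (assumption)

# Encoder: camera frame -> stimulation intensities in [0, 1].
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, N_ELECTRODES), nn.Sigmoid(),
)

# Fixed, differentiable simulator: each electrode contributes one Gaussian
# phosphene at a random location. `basis` is (N_ELECTRODES, IMG*IMG), untrained.
ys, xs = torch.meshgrid(torch.arange(IMG), torch.arange(IMG), indexing="ij")
centers = torch.rand(N_ELECTRODES, 2) * IMG
d2 = (ys.flatten()[None] - centers[:, :1]) ** 2 + (xs.flatten()[None] - centers[:, 1:]) ** 2
basis = torch.exp(-d2 / (2 * 2.0**2))

def simulate(stim):  # (B, N_ELECTRODES) -> (B, 1, IMG, IMG)
    return (stim @ basis).view(-1, 1, IMG, IMG)

# Decoder: reconstructs the input from the simulated percept.
decoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
frame = torch.rand(8, 1, IMG, IMG)  # stand-in for camera input

# One end-to-end training step: the task loss flows through the simulator
# back into the encoder, so the preprocessing is optimized for the task.
stim = encoder(frame)
percept = simulate(stim)
loss = F.mse_loss(decoder(percept), frame)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the simulator is fixed and differentiable, only the encoder and decoder receive gradient updates; swapping the reconstruction loss for any downstream task loss leaves the structure unchanged.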

Cited by 25 publications (52 citation statements) · References 61 publications

“…4 for a comparison). Note that we do not include an explicit sparsity regularizer in our objective function as in previous work [17]. This observation suggests that optimization for a task may lead to the benefit of sparser representations in addition to providing more informative task-relevant features to the RL agent.…”
Section: Experiments and Results
Confidence: 99%
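For readers unfamiliar with the "explicit sparsity regularizer" the statement contrasts itself with, a common form is an L1 penalty on the stimulation pattern added to the task objective. The sketch below illustrates that general idea only; it is not the formulation used in [17], and the function name and weight are hypothetical.

```python
import torch

def total_loss(task_loss: torch.Tensor, stimulation: torch.Tensor,
               weight: float = 1e-3) -> torch.Tensor:
    """Task loss plus an L1 penalty that pushes stimulation toward sparsity."""
    sparsity_penalty = stimulation.abs().mean()
    return task_loss + weight * sparsity_penalty

# The quoted finding: with task-driven (e.g., RL) optimization alone
# (effectively weight = 0), representations may become sparser anyway.
```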
“…All convolutional layers use a kernel size of 3, with stride 1 and padding 1. We found RL performance to be more stable when using a Swish activation function after every convolution layer instead of batch normalization and leaky ReLU as in the original implementation [17]. The last convolutional layer is followed by a tanh activation, after which the outputs are scaled to the range [0, 1].”
Section: Methods
Confidence: 99%
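The architecture this statement describes is concrete enough to sketch. Below is a minimal PyTorch rendering of a convolutional stack with kernel size 3, stride 1, padding 1, a Swish (SiLU) activation after every convolution, and a final tanh whose output is rescaled to [0, 1]. Channel widths and depth are assumptions, not values from the cited paper.

```python
import torch
import torch.nn as nn

class ConvStack(nn.Module):
    def __init__(self, in_ch: int = 1, hidden: int = 32,
                 out_ch: int = 1, n_layers: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_layers - 1):
            # Swish (nn.SiLU) after every conv, replacing batchnorm + leaky ReLU.
            layers += [nn.Conv2d(ch, hidden, kernel_size=3, stride=1, padding=1),
                       nn.SiLU()]
            ch = hidden
        # Final conv followed by tanh, per the quoted description.
        layers += [nn.Conv2d(ch, out_ch, kernel_size=3, stride=1, padding=1),
                   nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.net(x)          # tanh output lies in [-1, 1]
        return (y + 1.0) / 2.0   # rescale to [0, 1]

out = ConvStack()(torch.rand(1, 1, 64, 64))
assert out.min() >= 0.0 and out.max() <= 1.0
```

With stride 1 and padding 1 at kernel size 3, every layer preserves spatial resolution, so the stack maps an image to an equally sized output in [0, 1].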