2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2019
DOI: 10.1109/embc.2019.8856541
Deep reinforcement learning for task-based feature learning in prosthetic vision

Cited by 11 publications (8 citation statements)
References 6 publications
“…Future work could extend the current approach with other or more complex tasks. For instance, with reinforcement learning strategies (see White et al., 2019), the model could be extended to perform tasks that are more closely related to the everyday actions that need to be performed by the end-user, such as object manipulation (Levine, Finn, Darrell, & Abbeel, 2015) or object avoidance (LeCun, Muller, Ben, Cosatto, & Flepp, 2005).…”
Section: Discussion
confidence: 99%
“…Second, as a result of practical, medical, or biophysical limitations of the neural interface, one might want to tailor the stimulation parameters to additional constraints. Recent work on task-based feature learning for prosthetic vision suggests that deep learning models can be used to overcome such issues ( White, Kameneva, & McCarthy, 2019 ).…”
Section: Introduction
confidence: 99%
“…Some researchers also built 3D virtual scenes for obstacle avoidance experiments. 56 Tables 1 and 2 summarize more details of the above studies in terms of algorithms and simulation experiments.…”
Section: Methods
confidence: 99%
“…Moreover, the sparse computation of unary potentials improved the speed of semantic labeling to ensure that it could run on wearable auxiliary devices in real time. White et al 56 focused on basic orientation and mobility and adopted the deep reinforcement learning (DRL) algorithm to learn visual features by completing visual tasks. To solve the DRL navigation problem in a 3D environment, a new model of learning visual features through task-based simulation was proposed by using an improved version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, 57 and the learned features were directly converted into real RGB-D images.…”
Section: Methods
confidence: 99%
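The citation statement above refers to an A3C-style agent learning visual features by maximizing task reward. As a minimal sketch of the core advantage actor-critic update that A3C parallelizes across workers — function names and the NumPy formulation are illustrative, not the paper's implementation:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute discounted returns G_t = r_t + gamma * G_{t+1}."""
    rewards = np.asarray(rewards, dtype=float)
    returns = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def advantage_actor_critic_losses(rewards, values, log_probs, gamma=0.99):
    """Actor-critic losses from one rollout:
    advantage A_t = G_t - V(s_t); the policy (actor) is pushed toward
    actions with positive advantage, the critic regresses V toward G_t."""
    returns = discounted_returns(rewards, gamma)
    advantages = returns - np.asarray(values, dtype=float)
    policy_loss = -np.mean(advantages * np.asarray(log_probs, dtype=float))
    value_loss = np.mean(advantages ** 2)
    return policy_loss, value_loss
```

In full A3C, several workers compute these losses on short rollouts and asynchronously apply gradients to shared network weights; the sketch shows only the per-rollout loss computation.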
“…One way to address this issue is to include an auxiliary output unrelated to the main task [21]. Such an auxiliary output, for example from a secondary depth prediction task, has been shown to improve navigation of DRL agents in complex environments [22] as well as learning saliency maps for prosthetic vision [23]. Previous work used an image reconstruction task from phosphene patterns to optimize prosthetic vision [17].…”
Section: Related Work
confidence: 99%
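The auxiliary-output idea quoted above (e.g. a secondary depth-prediction head shaping the features learned for the main task) can be sketched as a shared encoder with two output heads and a weighted combined loss. All names, shapes, and the weighting scheme below are illustrative assumptions, not any cited paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w_shared, w_policy, w_depth):
    """Shared encoder feeds both the main policy head and an auxiliary
    depth head; gradients from both losses shape the shared features."""
    h = np.tanh(x @ w_shared)       # shared visual features
    policy_logits = h @ w_policy    # main task output
    depth_pred = h @ w_depth        # auxiliary depth prediction
    return policy_logits, depth_pred

def combined_loss(policy_logits, action, depth_pred, depth_target, aux_weight=0.5):
    """Cross-entropy on the chosen action plus weighted MSE on depth."""
    p = np.exp(policy_logits - policy_logits.max())
    p /= p.sum()
    cross_entropy = -np.log(p[action])
    depth_mse = np.mean((depth_pred - depth_target) ** 2)
    return cross_entropy + aux_weight * depth_mse
```

Setting `aux_weight=0` recovers the plain single-task loss; the nonzero case illustrates how the auxiliary signal adds an extra training gradient through the shared encoder.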