2022
DOI: 10.1002/aisy.202200168

Hierarchical Planning with Deep Reinforcement Learning for 3D Navigation of Microrobots in Blood Vessels

Abstract: Designing intelligent microrobots that can autonomously navigate and perform instructed routines in blood vessels, a crowded environment with complexities including Brownian disturbance, concentrated cells, confinement, different flow patterns, and diverse vascular geometries, can offer enormous opportunities and challenges in biomedical applications. Herein, a biological-agent-mimicking hierarchical control scheme that enables a microrobot to efficiently navigate and execute customizable routines in simplif…

Cited by 8 publications (4 citation statements) · References 40 publications
“…Computationally, new numerical and theoretical models of micro/nanorobotic swarms have accelerated the design of guidance strategies. For instance, simulations on swarm generation and motion, and on swarm decision-making in completing cargo transportation in complex environments, have been performed, which validated the proposed models and elucidated the underlying swarm mechanisms. Moreover, through the implementation of machine learning and control loops into the guidance strategies, automated swarm control was realized to precisely adjust the swarm pattern and motion according to environmental changes and task requirements (Figure C). In the development of automated swarm control, the underlying physical mechanisms of swarms must be taken into account.…”
Section: Discussion
confidence: 88%
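
The closed-loop behavior this statement describes can be pictured with a minimal sketch. The block below assumes hypothetical observe and actuate interfaces (for example, imaging-based swarm feedback and a field generator) and a simple proportional correction; it is an illustration of a feedback control loop, not the controllers used in the cited works.

import numpy as np

def closed_loop_swarm_control(observe, actuate, target_pattern, gain=0.5, steps=100):
    # Minimal feedback loop: observe the swarm state, compare it with the desired
    # pattern, and adjust the actuation field accordingly (all interfaces hypothetical).
    field = np.zeros(3)                    # e.g., an applied field vector (toy placeholder)
    for _ in range(steps):
        state = observe()                  # e.g., swarm centroid/spread estimated from imaging
        error = target_pattern - state     # deviation from the commanded pattern
        field = field + gain * error       # proportional correction (illustrative controller)
        actuate(field)                     # apply the updated field to the swarm
    return field

For instance, observe could return a three-component estimate of the swarm centroid and actuate could forward the field vector to coil drivers; a real system would replace the proportional rule with the learned or model-based controllers the statement refers to.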
“…Considering the promising success demonstrated by RL algorithms in a wide range of robotic control problems [16, 17], as well as the inherent complexity of the environments in which microrobots operate, recent studies have focused on the use of RL algorithms in microrobot control and navigation. For instance, Q-learning, a value-based RL algorithm, has been employed to learn appropriate policies for movement in steady flow [18], to optimize the propulsion policies of three-sphere microswimmers [19], for motion control in simulated blood vessel environments [20], and to support real-time path planning for microrobot navigation [21]. However, Q-learning not only lacks performance efficiency in systems with continuous action spaces [22] but also exhibits a high degree of sample inefficiency [23, 24].…”
Section: Introduction
confidence: 99%
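
As a rough illustration of the value-based approach mentioned in this statement, the sketch below runs tabular Q-learning on a toy, one-dimensional discretized channel. The state count, rewards, and learning parameters are assumptions made for demonstration; the need to discretize states and actions into a table is the continuous-action limitation the citing authors point to.

import numpy as np

# Toy tabular Q-learning: navigate a discretized 1D channel toward a goal cell.
n_states, n_actions = 20, 2          # positions along the channel; actions: move left / right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection over the tabulated values
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else -0.01          # goal reward, small per-step penalty
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned greedy action per state (should mostly point toward the goal)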
“…[25] Yang et al. deployed hierarchical planning to navigate a microrobot in blood vessels, where the high-level controller set short-term targets along the desired path and the deep RL policy output a rotation sequence to approach those targets. [26] Nonetheless, these machine-learning-based works only generated near-optimal trajectories toward the targets but did not consider improving the IC of the tasks.…”
Section: Introduction
confidence: 99%
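
A minimal sketch of the hierarchical scheme summarized in this statement: a high-level planner advances short-term waypoints along a reference path, while a low-level controller drives the robot toward the current waypoint. The waypoint rule, step size, and path below are hypothetical, and the simple step-toward-target function stands in for the deep RL policy that outputs rotation sequences in the cited work.

import numpy as np

def high_level_planner(path, position, reach=0.5):
    # Return the next short-term target: the first waypoint on the reference path
    # that the robot has not yet reached (hypothetical waypoint-advancing rule).
    for waypoint in path:
        if np.linalg.norm(waypoint - position) > reach:
            return waypoint
    return path[-1]

def low_level_policy(position, target, step=0.2):
    # Stand-in for the learned low-level controller: here a fixed-size step toward the
    # target; in the cited work this role is played by a deep RL rotation-sequence policy.
    direction = target - position
    norm = np.linalg.norm(direction)
    return position + step * direction / norm if norm > 1e-9 else position

# Hypothetical 3D reference path through a vessel-like corridor.
path = [np.array(p, dtype=float) for p in [(1, 0, 0), (2, 1, 0), (3, 1, 1), (4, 2, 1)]]
pos = np.zeros(3)
for _ in range(60):
    pos = low_level_policy(pos, high_level_planner(path, pos))
print(np.round(pos, 2))  # should end near the final waypoint (4, 2, 1)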