2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636759

XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees

Cited by 8 publications (6 citation statements)
References 24 publications

“…Once users understand that reasoning, they can use the information to modify, debug, or diagnose the system. XAI can also show users existing policies learned by an algorithm, which the users can then analyze, interpret, and change [15]. Even specific kinds of concerns can be addressed through this type of XAI without having to retrain the system.…”
Section: Explainable Artificial Intelligence, A. Background of XAI
confidence: 99%
“…Roth et al. [15] developed "a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments with moving obstacles or targets." The method relies on an expert policy trained with deep reinforcement learning and is combined with a policy extraction technique that expresses the learned policy as a decision tree.…”
Section: Related Research
confidence: 99%
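The citing statement above summarizes the paper's core idea: distill an expert policy learned with deep reinforcement learning into an interpretable decision tree. As a rough illustration only, here is a minimal Python sketch of that kind of policy extraction by imitation (behavior-cloning the expert onto a decision tree, in the spirit of VIPER-style distillation). The sensor dimensions, the toy `expert_policy`, and all other names below are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch: distilling an "expert" navigation policy into a decision tree.
# All names (expert_policy, N_SAMPLES, SENSOR_DIM, ...) are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
N_SAMPLES, SENSOR_DIM, N_ACTIONS = 5000, 16, 3  # lidar-like readings; actions {left, straight, right}

def expert_policy(obs: np.ndarray) -> int:
    """Stand-in for a trained DRL expert: steer away from the closer obstacle."""
    left = obs[: SENSOR_DIM // 2].min()
    right = obs[SENSOR_DIM // 2 :].min()
    if min(left, right) > 0.5:          # nothing nearby: go straight
        return 1
    return 2 if left < right else 0     # turn away from the nearer side

# 1) Roll out the expert to collect (observation, action) pairs.
observations = rng.uniform(0.0, 1.0, size=(N_SAMPLES, SENSOR_DIM))
actions = np.array([expert_policy(o) for o in observations])

# 2) Fit a shallow decision tree that imitates the expert.
tree = DecisionTreeClassifier(max_depth=6).fit(observations, actions)
print("imitation accuracy:", tree.score(observations, actions))

# 3) The tree is the interpretable policy: its rules can be read, audited,
#    and edited without retraining the DRL expert.
print(export_text(tree, feature_names=[f"sensor_{i}" for i in range(SENSOR_DIM)]))
```

Reading or editing the printed rules is what enables the kind of inspection and modification the earlier citation statement attributes to this type of XAI; the real system would collect rollouts from the actual DRL policy in simulation rather than from random observations.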
“…Learning methods are increasingly being used for robot navigation in indoor and outdoor scenes. These include deep reinforcement learning (DRL) methods [1], [2], [3], [4], [5], [6], [7] and learning from demonstration [8], [9]. These methods have been evaluated in real-world scenarios, including obstacle avoidance, dynamic scenes, and uneven terrains.…”
Section: Introduction
confidence: 99%