2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc45102.2020.9294366
Online Monitoring for Safe Pedestrian-Vehicle Interactions

Cited by 17 publications (14 citation statements) · References 46 publications
“…Recall our motivating example in Section 3.3: we study the Polaris GEM e2 Electric Vehicle and its high-fidelity Gazebo simulation [14].…”
Section: Case Study 1: Vision-Based Lane Keeping With LaneNet
confidence: 99%
“…To prepare the training data for learning A_i and b_i to construct R_i, we use the Gazebo simulator in [14] to generate camera images p labeled with their ground-truth percepts z*. Each image is sampled from a uniform distribution D over the subspace X_i as well as the environment space E.…”
Section: Tracking Error Function Description
confidence: 99%
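The excerpt above describes sampling labeled percepts uniformly over a state subspace and fitting parameters A_i and b_i. The cited work's exact construction of R_i is not given here; as a minimal sketch, one could fit A_i and b_i by least squares over pairs of ground-truth and perceived percepts. The perception stand-in, dimensions, and sampling ranges below are assumptions for illustration.

```python
# Hedged sketch: sample ground-truth percepts z* uniformly from an assumed box
# subspace X_i, label them with a stand-in perception output, and fit an affine
# map (A_i, b_i) by least squares. The actual R_i construction in the cited
# paper may differ (e.g., bounding the error rather than regressing it).
import numpy as np

rng = np.random.default_rng(0)

def perceive(state):
    """Hypothetical stand-in for the perception pipeline (e.g., LaneNet run on
    a rendered Gazebo image). Returns a noisy estimate of the true percept."""
    return state + 0.05 * rng.standard_normal(state.shape)

# Uniform sampling over an assumed subspace X_i (e.g., lateral offset, heading error).
n_samples, dim = 500, 2
lo, hi = np.array([-1.0, -0.3]), np.array([1.0, 0.3])
z_true = rng.uniform(lo, hi, size=(n_samples, dim))

# Label each sample with the perceived percept.
z_hat = np.array([perceive(z) for z in z_true])

# Fit z_hat ~ A_i z* + b_i via least squares on an augmented design matrix.
X = np.hstack([z_true, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(X, z_hat, rcond=None)
A_i, b_i = coef[:dim].T, coef[dim]
print("A_i:\n", A_i, "\nb_i:", b_i)
```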
“…A comprehensive understanding of dynamic human environments is essential for autonomous mobile robots to safely and smoothly enter our daily lives [Chen et al., 2017]. The mobile robot needs to effectively encode motion patterns of surrounding pedestrians from observation data, accurately predict their future trajectories, and efficiently plan its own paths for rapid task execution free of safety risks [Ziebart et al., 2009, Du et al., 2020]. Social awareness is crucial for robots to achieve such human-centered autonomy, and significant progress has been made to acquire this capability by studying human-human interaction and predicting trajectories of multiple pedestrians [Alahi et al., 2016, Vemula et al., 2018, Gupta et al., 2018, Ivanovic and Pavone, 2019].…”
Section: Introduction
confidence: 99%
“…As mobile robots are becoming prevalent in people's daily lives, autonomous navigation in crowded places with other dynamic agents is an important yet challenging problem [1], [2]. Inspired by the recent applications of deep learning in robot control [3]- [6] and in graph modeling [7], we seek to build a learning-based graphical model for mobile robot navigation in pedestrian-rich environments.…”
Section: Introduction
confidence: 99%
“…We present the following contributions: (1) We propose a novel deep neural network architecture called DS-RNN, which enables the robot to perform efficient spatio-temporal reasoning in crowd navigation; (2) We train the network using model-free RL without any supervision, which both simplifies the learning pipeline and prevents the network from converging to a suboptimal policy too early; (3) Our method demonstrates better performance in challenging navigation settings compared with previous methods.…”
Section: Introduction
confidence: 99%
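The contributions above combine a recurrent network for spatio-temporal reasoning with model-free RL. The excerpt does not detail the DS-RNN architecture itself; the sketch below is a generic, hypothetical recurrent navigation policy (per-pedestrian encoding, mean pooling, a GRU, and actor/critic heads) to illustrate the general pattern. All layer sizes, input conventions, and the pooling choice are assumptions, not the authors' design.

```python
# Hedged sketch of a generic RNN-based crowd-navigation policy for model-free
# RL (e.g., usable with PPO). This is NOT the DS-RNN architecture from the
# cited paper; it only illustrates recurrent spatio-temporal reasoning over
# robot and pedestrian states.
import torch
import torch.nn as nn

class RecurrentNavPolicy(nn.Module):
    def __init__(self, robot_dim=9, human_dim=5, hidden=64, action_dim=2):
        super().__init__()
        self.human_enc = nn.Linear(human_dim, hidden)   # per-pedestrian encoder
        self.robot_enc = nn.Linear(robot_dim, hidden)   # robot state encoder
        self.gru = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, action_dim)      # action (e.g., velocity) mean
        self.critic = nn.Linear(hidden, 1)              # state value for RL

    def forward(self, robot_state, human_states, h=None):
        # robot_state: (B, robot_dim); human_states: (B, N, human_dim)
        humans = torch.relu(self.human_enc(human_states)).mean(dim=1)  # pool over N
        robot = torch.relu(self.robot_enc(robot_state))
        x = torch.cat([robot, humans], dim=-1).unsqueeze(1)            # (B, 1, 2*hidden)
        out, h = self.gru(x, h)                                        # temporal reasoning
        feat = out.squeeze(1)
        return self.actor(feat), self.critic(feat), h

# Example rollout step with one robot and three pedestrians.
policy = RecurrentNavPolicy()
action_mean, value, h = policy(torch.zeros(1, 9), torch.zeros(1, 3, 5))
```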