2020
DOI: 10.1177/0278364920961809
Multimodal interaction-aware motion prediction for autonomous street crossing

Abstract: For mobile robots navigating on sidewalks, the ability to safely cross street intersections is essential. Most existing approaches rely on the recognition of the traffic light signal to make an informed crossing decision. Although these approaches have been crucial enablers for urban navigation, the capabilities of robots employing such approaches are still limited to navigating only on streets that contain signalized intersections. In this article, we address this challenge and propose a multimodal convolutio…


Cited by 29 publications (11 citation statements) | References 48 publications
“…When robot and human goals can be explicitly defined in an environment, planning-based approaches are highly suitable. A few real-world experiments (Bartoli et al., 2018; Bera et al., 2016; Radwan et al., 2018; Vasquez, 2016) conducted by different institutes in the domain of human motion prediction are illustrated in Table 4, which reports the study design, including the number of participants and the quantities measured during each experiment.…”
Section: Planning Based Approaches
confidence: 99%
“…Bera et al. (2016) proposed path prediction based on global and local movement patterns in videos captured by overhead cameras, but their experiments lacked testing of the model on robots. The models of Bartoli et al. (2018) and Radwan et al. (2018) performed well by combining human-motion approaches with context-aware mapping, utilizing an LSTM sequential model to achieve energy optimization as well as better tolerance to new, unseen scenarios.…”
Section: Planning Based Approaches
confidence: 99%
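Learned motion predictors such as the LSTM models quoted above are conventionally benchmarked against a constant-velocity extrapolation of the observed track. The sketch below is an illustrative baseline only, not the cited models; the track values are invented for the example.

```python
# Minimal constant-velocity baseline for pedestrian motion prediction.
# Learned predictors (e.g., LSTM-based models) are typically compared
# against exactly this kind of extrapolation.

def predict_constant_velocity(track, horizon):
    """Extrapolate a 2-D track (list of (x, y) points) `horizon` steps
    ahead, assuming the last observed per-step velocity stays constant."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per time step
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, horizon + 1)]

# A pedestrian observed walking at a steady heading and speed:
observed = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
future = predict_constant_velocity(observed, horizon=3)
print(future)  # [(4.0, 2.0), (5.0, 2.5), (6.0, 3.0)]
```

The baseline is strong on short horizons and straight paths; the interaction-aware models discussed here aim to beat it precisely where pedestrians turn, stop, or react to traffic.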
“…However, temporal fusion has received much less attention, even though autonomous systems generate a great deal of multimodal temporal data from a range of sensors such as cameras, LiDAR (Light Detection and Ranging), and wheel odometry. Typical driving-automation tasks of interest include learning driver behavior [3] and intent [12], motion forecasting [28], [29], object detection [16], learning affordances [30], action regression [15], [31], and semantic segmentation [32], [33], among others.…”
Section: B. Learning in Autonomous Navigation
confidence: 99%
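Any temporal-fusion pipeline of the kind quoted above must first align sensor streams that arrive at different rates. A minimal sketch of that alignment step, pairing each camera frame with the nearest-in-time odometry reading; the sensor rates and timestamps are illustrative assumptions, not values from the cited works.

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the timestamp closest to t (timestamps sorted ascending)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Choose whichever neighbor is closer in time.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Hypothetical streams: a 10 Hz camera and a 50 Hz wheel-odometry sensor.
camera_ts = [0.0, 0.1, 0.2]
odom_ts = [0.0, 0.02, 0.04, 0.06, 0.08, 0.1, 0.12, 0.14, 0.16, 0.18, 0.2]

# Each camera frame is paired with its temporally closest odometry reading,
# producing the synchronized tuples a fusion model would consume.
pairs = [(t, odom_ts[nearest(odom_ts, t)]) for t in camera_ts]
print(pairs)  # [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2)]
```

Nearest-neighbor matching is the simplest scheme; real pipelines often interpolate the faster stream instead, but the alignment concern is the same.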
“…Also, robots operating in areas reserved for pedestrians, such as sidewalks or pedestrian zones (e.g., delivery robots), benefit substantially from precise detections. Such detections, for example, enable delivery robots to safely cross a street [11], [14], [15].…”
Section: Introduction
confidence: 99%