2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594258

Human Motion Prediction Under Social Grouping Constraints

Abstract: Accurate long-term prediction of human motion in populated spaces is an important but difficult task for mobile robots and intelligent vehicles. What makes this task challenging is that human motion is influenced by a large variety of factors including the person's intention, the presence, attributes, actions, social relations and social norms of other surrounding agents, and the geometry and semantics of the environment. In this paper, we consider the problem of computing human motion predictions that account…

Cited by 30 publications (31 citation statements)
References 24 publications
“…obstacle-aware methods , which account for the presence of individual static obstacles (e.g., Alahi et al, 2016; Althoff et al, 2008b; Bera et al, 2016; Elfring et al, 2014; Ferrer and Sanfeliu, 2014; Rehder and Klöden, 2015; Trautman and Krause, 2010; Vemula et al, 2017); 3.3. map-aware methods , which account for environment geometry and topology (e.g., Chen et al, 2017; Chung and Huang, 2010, 2012; Gong et al, 2011; Henry et al, 2010; Ikeda et al, 2012; Kooij et al, 2019; Liao et al, 2003; Pfeiffer et al, 2016; Pool et al, 2017; Rösmann et al, 2017; Rudenko et al, 2017, 2018b; Vasquez, 2016; Yen et al, 2008; Ziebart et al, 2009); 3.4. semantics-aware methods , which additionally account for environment semantics or affordances such as no-go zones, crosswalks, sidewalks, or traffic lights (e.g., Ballan et al, 2016; Coscia et al, 2018; Karasev et al, 2016; Kitani et al, 2012; Kuhnt et al, 2016; Lee et al, 2017; Ma et al, 2017; Rehder et al, 2018; Zheng et al, 2016).…”
Section: Taxonomy
confidence: 99%
“…In addition to basic trajectory data, state-of-the-art methods for tracking and motion prediction can also incorporate information about the environment, social grouping, head orientation, or personal traits. For instance, Lau et al [13] estimate social grouping formations during tracking, and Rudenko et al [20] use group affiliation as a contextual cue to predict future motion. Unhelkar et al [24] use head orientation to disambiguate and recognize the typical motion patterns that people follow.…”
Section: Related Work
confidence: 99%
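As a rough illustration of how group affiliation can serve as a contextual cue in motion prediction, the following is a minimal sketch, not the paper's actual model: a constant-velocity rollout in which each agent is additionally attracted toward its group's centroid. The function name, the gain `k_group`, and the time step are hypothetical choices for illustration only.

```python
def predict_with_groups(positions, velocities, groups,
                        horizon=10, dt=0.4, k_group=0.5):
    """Roll out 2D positions with a constant-velocity model plus a
    simple spring-like pull of each agent toward its group centroid
    (a hypothetical stand-in for a social grouping constraint).

    positions, velocities: lists of (x, y) tuples, one per agent.
    groups: list of index lists, e.g. [[0, 1]] puts agents 0 and 1
            in one group; ungrouped agents move at constant velocity.
    Returns a list of horizon+1 snapshots of all agent positions.
    """
    pos = [list(p) for p in positions]
    vel = [list(v) for v in velocities]
    trajectory = [[tuple(p) for p in pos]]
    for _ in range(horizon):
        # Accumulate a cohesion force pulling members toward their centroid.
        forces = [[0.0, 0.0] for _ in pos]
        for group in groups:
            cx = sum(pos[i][0] for i in group) / len(group)
            cy = sum(pos[i][1] for i in group) / len(group)
            for i in group:
                forces[i][0] += k_group * (cx - pos[i][0])
                forces[i][1] += k_group * (cy - pos[i][1])
        # Euler integration of the perturbed constant-velocity model.
        for i in range(len(pos)):
            vel[i][0] += forces[i][0] * dt
            vel[i][1] += forces[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
        trajectory.append([tuple(p) for p in pos])
    return trajectory
```

With two grouped pedestrians walking in parallel, the cohesion term draws their predicted paths together over the horizon while both keep advancing along their heading.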