2015
DOI: 10.1007/978-3-319-23374-1_19

Learning Spatial Models for Navigation

Abstract: Typically, autonomous robot navigation relies on a detailed, accurate map. The associated representations, however, do not readily support human-friendly interaction. The approach reported here offers an alternative: navigation with a spatial model and commonsense qualitative spatial reasoning. Both are based on research about how people experience and represent space. The spatial model quickly develops as the result of incremental learning during travel. In extensive empirical testing, qualitative s…

Cited by 6 publications (2 citation statements)
References 29 publications
“…The context of this work is SemaFORR, a cognitively-based control system for autonomous indoor navigation (Epstein et al., 2015; Korpan, 2019). At decision point d = (x, y, θ, V), SemaFORR records the robot's location (x, y), its orientation θ, and its view V, the data from its onboard range finder.…”
Section: Spatial Affordances
confidence: 99%
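
As a brief illustration of the decision-point record described in the statement above, the sketch below shows one way such a record might be held in code. It assumes Python; the class name DecisionPoint and its fields are hypothetical and are not taken from SemaFORR itself.

```python
# Illustrative sketch only: a decision point d = (x, y, theta, V) as described
# in the SemaFORR statement above. The class and field names are hypothetical,
# not taken from the cited implementation.
from dataclasses import dataclass
from typing import Sequence

@dataclass(frozen=True)
class DecisionPoint:
    x: float                # robot's x coordinate
    y: float                # robot's y coordinate
    theta: float            # orientation, in radians
    view: Sequence[float]   # onboard range-finder readings (the view V)

# Example: one record taken at a decision point.
d = DecisionPoint(x=2.5, y=1.0, theta=0.0, view=[1.2, 1.1, 0.9, 3.4])
print(d)
```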
“…Here we elaborate on the ability of the framework to explain, a feature of our methodology that we have not previously described in detail. ArgHRI consists of a dialogue manager [2], which interacts with human users and employs the ArgTrust [32,33] engine to perform reasoning, and a robot controller, which employs the HRTeam framework [9,28,31] for managing robot behaviour and interacting with physical or simulated robots. The dialogue manager controls all human-robot dialogues and dialogue-related events, including selection of appropriate argumentation-based dialogue type (illustrated in Figure 1), dialogue move generation (an example is shown in Figure 2), translation into scripted chat-style content for presentation to humans (examples are contained in Figures 4, 5 and 6) and maintenance of dialogue history.…”
Section: Our Approach: ArgHRI Framework and Experimental Setup
confidence: 99%
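
The ArgHRI statement above describes a split between a dialogue manager, which handles all human-robot dialogues, and a robot controller, which manages robot behaviour. The sketch below illustrates that division of labour under stated assumptions; every name in it is a hypothetical placeholder, not the actual ArgHRI, ArgTrust, or HRTeam API.

```python
# Illustrative sketch only: a dialogue manager that drives a chat-style
# exchange and hands actions to a separate robot controller, mirroring the
# component split described in the ArgHRI statement above. All names are
# hypothetical placeholders.

class RobotController:
    """Stands in for the controller that executes robot behaviour."""
    def execute(self, action: str) -> None:
        print(f"robot executes: {action}")

class DialogueManager:
    """Controls dialogues, keeps their history, and delegates actions."""
    def __init__(self, controller: RobotController) -> None:
        self.controller = controller
        self.history: list[str] = []

    def handle(self, human_utterance: str) -> str:
        self.history.append(f"human: {human_utterance}")
        # A real system would select a dialogue type and generate a dialogue
        # move via argumentation-based reasoning; here we return a scripted reply.
        reply = f"robot: acknowledged '{human_utterance}'"
        self.history.append(reply)
        self.controller.execute("move-to-goal")
        return reply

# Example exchange.
dm = DialogueManager(RobotController())
print(dm.handle("please inspect room 3"))
```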