2002
DOI: 10.1016/s0921-8890(02)00166-5

Mobile robot programming using natural language

Cited by 120 publications (53 citation statements). References 5 publications.
“…In this context, two domains of interaction that humans exploit with great fidelity are spoken language and the visual ability to observe and understand intentional action. A good deal of research effort has been dedicated to the specification and implementation of spoken language systems for human-robot interaction (Crangle & Suppes 1994, Lauria et al 2002, Severinson-Eklundh 2003, Kyriacou et al 2005, Mavridis & Roy 2006). The research described in the current chapter extends these approaches with a Spoken Language Programming system that allows a more detailed specification of conditional execution, and by using language as a complement to vision-based action perception as a mechanism for indicating how things are to be done, in the context of cooperative, turn-taking behavior.…”
Section: Discussion
confidence: 99%
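The excerpt above mentions a Spoken Language Programming system that supports conditional execution. As a minimal illustration of that idea only (not the cited system's actual grammar or implementation), the toy parser below turns an "if ... then ... else ..." utterance into a small program structure; the phrasing and output format are invented for this sketch.

```python
# Toy sketch: parse a spoken-style instruction with a conditional
# into a small program structure. The phrasing and the output format
# are hypothetical; this is not the cited system's grammar or API.

def parse_instruction(text):
    """Parse 'if <condition> then <action> [else <action>]' utterances."""
    text = text.lower().strip()
    if text.startswith("if ") and " then " in text:
        condition, rest = text[3:].split(" then ", 1)
        then_action, _, else_action = rest.partition(" else ")
        return {"if": condition.strip(),
                "then": then_action.strip(),
                "else": else_action.strip() or None}
    return {"do": text}  # unconditional command

print(parse_instruction("if you see a door then turn left else go forward"))
# {'if': 'you see a door', 'then': 'turn left', 'else': 'go forward'}
```

A real system would of course ground the condition and actions in perception and motor control; the point here is only that a conditional utterance maps naturally onto a program-like structure rather than a single command.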
“…Part of the challenge is to define an intermediate layer of language-commanded robot actions that are well adapted to a class of HRI cooperation tasks. This is similar to the language-based task analysis in Lauria et al (2002). An essential part of the analysis we perform concerns examining a given task scenario and determining the set of action/command primitives that satisfy two requirements.…”
Section: Language and Meaning
confidence: 99%
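The "intermediate layer of language-commanded robot actions" described above can be pictured as a small table mapping verbal commands to motion primitives. The sketch below assumes a hypothetical robot interface with drive() and rotate() methods; the primitive set and phrasings are illustrative, not the ones derived in the cited task analysis.

```python
# Minimal sketch of an intermediate layer of language-commanded
# action primitives. The primitives and the robot interface are
# invented for illustration.

PRIMITIVES = {
    "go forward": lambda robot: robot.drive(speed=0.2),
    "turn left":  lambda robot: robot.rotate(angle=90),
    "turn right": lambda robot: robot.rotate(angle=-90),
    "stop":       lambda robot: robot.drive(speed=0.0),
}

class FakeRobot:
    """Stand-in robot so the sketch runs without hardware."""
    def drive(self, speed):  print(f"drive(speed={speed})")
    def rotate(self, angle): print(f"rotate(angle={angle})")

def execute(command, robot):
    """Dispatch one verbal command to its motion primitive."""
    action = PRIMITIVES.get(command.lower().strip())
    if action is None:
        print(f"unknown command: {command!r}")  # would prompt a rephrase
    else:
        action(robot)

execute("Turn left", FakeRobot())
```

The design question the excerpt raises is precisely which entries belong in such a table for a given class of cooperation tasks, so that the primitives are both expressible in language and sufficient for the task.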
“…We call this type of method the Direct Commanding Method (DCM), in which users directly send commands to robots in order to control them. Many studies have used DCM, with modalities such as gestures [7,8,9], speech recognition [10,11,12], and control devices like joysticks [13,14,15]. Figure 1 shows the DCM interaction model.…”
Section: Direct Commanding Methods
confidence: 99%
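As a rough picture of the Direct Commanding Method characterized in this excerpt, the loop below maps each user input straight to one robot command, with no intermediate autonomy. The key bindings and the send_velocity() placeholder are assumptions made for the sketch, not part of the cited studies.

```python
# Hedged sketch of a DCM loop: every user input becomes exactly one
# robot command. Bindings and the output interface are illustrative.

COMMANDS = {"w": (0.2, 0.0), "s": (-0.2, 0.0),
            "a": (0.0, 0.5), "d": (0.0, -0.5)}  # (linear m/s, angular rad/s)

def send_velocity(linear, angular):
    # Placeholder for a real robot interface (e.g., a velocity topic).
    print(f"linear={linear:+.1f} m/s  angular={angular:+.1f} rad/s")

def dcm_loop(inputs):
    """Each user input maps directly to one robot command."""
    for key in inputs:
        linear, angular = COMMANDS.get(key, (0.0, 0.0))
        send_velocity(linear, angular)

dcm_loop(["w", "w", "a", "w", "s"])  # simulated keystrokes
```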
“…Under this requirement, several research issues are currently active in the robotics community. These issues include speaker localization [1][5], speech separation and enhancement [2], speech recognition and natural dialog [3], and speaker identification and multi-modal interaction [4]. Among them, speaker localization, using either biological hearing principles [5] or microphone arrays [1], has drawn considerable attention for many years [6].…”
Section: Introduction
confidence: 99%
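Speaker localization with a microphone array, one of the research issues listed above, is commonly approached through time-difference-of-arrival (TDOA) estimation. The sketch below uses GCC-PHAT cross-correlation on a synthetic two-microphone signal; the geometry, sample rate, and signals are invented for illustration and do not reproduce the cited systems' methods.

```python
# Illustrative TDOA speaker localization with a two-microphone array,
# using GCC-PHAT. All parameters and signals are synthetic assumptions.

import numpy as np

FS = 16000            # sample rate (Hz)
MIC_DIST = 0.2        # microphone spacing (m)
SPEED_OF_SOUND = 343.0

def gcc_phat(sig, ref):
    """Return the delay (in samples) of sig relative to ref."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return np.argmax(np.abs(cc)) - max_shift

# Synthetic test: the same noise burst arrives 5 samples later at mic 2.
rng = np.random.default_rng(0)
source = rng.standard_normal(FS // 10)
mic1 = source
mic2 = np.concatenate((np.zeros(5), source[:-5]))

delay = gcc_phat(mic2, mic1)                # expect ~5 samples
tdoa = delay / FS
angle = np.degrees(np.arcsin(np.clip(tdoa * SPEED_OF_SOUND / MIC_DIST, -1, 1)))
print(f"delay={delay} samples, bearing approx. {angle:.1f} degrees")
```

The estimated delay converts to a bearing under a far-field assumption; real systems refine this with more microphones, tracking, and robustness to reverberation.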