2019
DOI: 10.1145/3359618

Robots Learning to Say “No”

Abstract: "No" is one of the first ten words used by children and embodies the first form of linguistic negation. Despite its early occurrence, the details of its acquisition remain largely unknown. The circumstance that "no" cannot be construed as a label for perceptible objects or events puts it outside the scope of most modern accounts of language acquisition. Moreover, most symbol grounding architectures will struggle to ground the word due to its non-referential character. The presented work extends symbol groundin… Show more

Cited by 13 publications (3 citation statements)
References 35 publications
“…In support of this view are the following transcripts originating from the negation acquisition studies conducted by Förster, Saunders, Lehmann, and Nehaniv (2019). These studies consisted of multiple sessions per participant, and the transcripts both pertain to participant P12 (P) teaching object labels to Deechee (D), a childlike humanoid robot that was presented to participants as a young language learner.…”
Section: Introduction
confidence: 99%
“…The process of defining the types of errors could also help us to understand why they arise, measure their impact, and explore possibilities and appropriate ways to detect, mitigate, and recover from them. If, for example, artificial agents and human users are mismatched conversational partners, as suggested by Moore (2007) and Förster et al. (2019), and if this mismatch creates constraints and a "habitability gap" in HRI (Moore, 2017), are there specific types of failures that only occur due to such asymmetric setups? And, if yes, what does that mean for potential error management in HRI?…”
Section: Wanted: A Taxonomy of Conversational Failures in HRI
confidence: 99%
“…The potential of task-related aspects of negation in the context of explaining (the robot uses it for contrasting) in HRI has not been explored so far. Studies have often focused on narrow aspects of negation, such as affect or volition, as context conditions (Förster et al., 2019). Recently, however, the focus has shifted towards explainable robots, with some progress in the direction of explaining why robots reject human commands (Scheutz et al., 2022).…”
Section: Introduction
confidence: 99%