2006
DOI: 10.1007/s10514-006-9015-6
Socially Distributed Perception: GRACE plays social tag at AAAI 2005

Abstract: This paper presents a robot search task (social tag) that uses social interaction, in the form of asking for help, as an integral component of task completion. Socially distributed perception is defined as a robot's ability to augment its limited sensory capacities through social interaction. We describe the task of social tag and its implementation on the robot GRACE for the AAAI 2005 Mobile Robot Competition & Exhibition. We then discuss our observations and analyses of GRACE's performance as a situated inte…

Cited by 35 publications (17 citation statements)
References 14 publications
“…We correlated our results to existing work that maps data points on the arousal-valence space to affective adjectives [18], as a means of generating loose-yet-informative keywords to roughly describe how various tail configurations may be perceived. We took the average rating for each motion and correlated it with the closest point on the previous work.…”
Section: Correlating Tail Motions to Affective Adjectives
confidence: 99%
“…Some have suggested the use of facial expressions and embodied gestures, where examples include mechanized faces with eyebrows, mouths, etc. [1][2][3][34], animated faces on screens [16,18], using mixed reality to superimpose graphics faces on robots [31], humanlike whole gestures with arms, etc. [2], or even using gaze [29].…”
Section: Related Work
confidence: 99%
“…For one, robotic systems that use natural language for robot instruction either do not have natural language fully integrated into the robotic architecture (for example, Michalowski et al. [2007]) or are limited to simple instructions (for example, Firby [1989], Atrash et al. [2009]). And many systems use rule-based grammars due to the difficulty of producing a well-trained grammar on what is invariably the small amount of data applicable to the domain.…”
Section: Previous Work
confidence: 99%
“…Most approaches to natural language understanding on robots (e.g., [1]- [4]) are sequential, make limited use of context, task knowledge, and goal structures, and ignore physical aspects of language users. In contrast, converging evidence from psycholinguistics suggests that human language understanding is incremental and parallel, depends on the speaker's and listener's contexts, utilizes task and goal knowledge, and involves the perceptions and perspectives of situated, embodied agents.…”
Section: Introduction
confidence: 99%