5th IEEE-RAS International Conference on Humanoid Robots, 2005.
DOI: 10.1109/ichr.2005.1573597
Distributed visual attention on a humanoid robot

Abstract: Complex visual processes such as visual attention are often computationally too expensive to allow real-time implementation on a single computer. To solve this problem we study distributed computer architectures that enable us to divide complex tasks into several smaller problems. In this paper we demonstrate how to implement a distributed visual attention system on a humanoid robot to achieve real-time operation at relatively high resolutions and frame rates. We start from a popular theory of bottom-up visual a…

Cited by 38 publications (25 citation statements)
References 12 publications
“…The overt attention models described in [51] and [71]–[84] are designed to be implemented in robots/robotic heads as a component of their cognition. Among them, the models in [51], [71], [72], and [74] adopt different variants of the covert model NVT [13] to identify visually salient/task-relevant stimuli and introduce different measures to deal with the research issues involved in robotic overt attention.…”
Section: A. Overt Attention Models (mentioning)
confidence: 99%
“…Ude et al (2005) demonstrated that with proper parallel processing in a distributed implementation, sufficient speeds were achieved to steer the visual system of the humanoid robot they used in real time. The model of Ude et al (2005) was further developed by Morén et al (2008), strengthening the top-down aspects and exploring a new way of integrating bottom-up and top-down mechanisms. The authors combined the use of saliency maps from Itti and Koch's model (Itti et al., 1998) with a more flexible version of the feature-specific top-down mechanism of Cave's FeatureGate (Cave, 1999).…”
Section: www.intechopen.com (mentioning)
confidence: 99%
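The distributed design this statement describes, where each visual modality is processed by its own worker and the results are fused afterwards, can be sketched as follows. This is a minimal illustration under assumptions of my own, not the implementation from Ude et al. (2005): the toy extractor functions and the averaging fusion rule are invented stand-ins for the real feature pipelines.

```python
# Hypothetical sketch (not the code of Ude et al., 2005): per-modality
# feature maps are computed in parallel workers, then fused into one map.
from concurrent.futures import ThreadPoolExecutor

def intensity_map(frame):
    # Toy extractor: treat pixel values directly as the intensity response.
    return [[v for v in row] for row in frame]

def edge_map(frame):
    # Toy extractor: crude horizontal gradient as an "edge" response.
    return [[abs(row[min(c + 1, len(row) - 1)] - row[c])
             for c in range(len(row))] for row in frame]

def fuse(maps):
    # Fuse modality maps by pixelwise averaging (an assumed fusion rule).
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(m[r][c] for m in maps) / len(maps)
             for c in range(cols)] for r in range(rows)]

def attend(frame, extractors):
    # Each modality runs in its own worker, mimicking a distributed pipeline.
    with ThreadPoolExecutor(max_workers=len(extractors)) as pool:
        maps = list(pool.map(lambda f: f(frame), extractors))
    return fuse(maps)

frame = [[0.0, 0.5, 1.0],
         [0.0, 0.0, 1.0]]
saliency = attend(frame, [intensity_map, edge_map])
```

In a real system each worker would be a separate machine or process computing colour, intensity, edge, stereo, or motion maps on full camera frames; the threads here only stand in for that decomposition.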
“…Their model runs within 280 ms on a Pentium 4 at 2.8 GHz with 512 MB RAM for an input image of 160 × 120 pixels. In the same year, a distributed visual attention system for a humanoid robot was proposed (Ude et al., 2005). This system uses five different modalities: colour, intensity, edges, stereo, and motion.…”
Section: Related Work (mentioning)
confidence: 99%
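The bottom-up combination step common to the Itti-Koch-style models cited above can be sketched in a few lines: each modality's feature map is normalized, the maps are summed with weights, and the maximum of the resulting saliency map gives the attention target. This is a generic illustration of the technique, not the cited systems' code; the function names and the uniform default weights are assumptions.

```python
# Hypothetical sketch of bottom-up saliency combination in the spirit of
# the Itti-Koch model family referenced by the citing papers.

def normalize(feature_map):
    """Scale a 2-D map to [0, 1]; a constant map becomes all zeros."""
    flat = [v for row in feature_map for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in feature_map]
    return [[(v - lo) / (hi - lo) for v in row] for row in feature_map]

def combine_saliency(feature_maps, weights=None):
    """Weighted sum of normalized modality maps -> one saliency map."""
    if weights is None:
        weights = [1.0 / len(feature_maps)] * len(feature_maps)
    maps = [normalize(m) for m in feature_maps]
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[r][c] for w, m in zip(weights, maps))
             for c in range(cols)] for r in range(rows)]

def most_salient(saliency):
    """Return (row, col) of the saliency maximum: the attention target."""
    return max(((r, c) for r in range(len(saliency))
                for c in range(len(saliency[0]))),
               key=lambda rc: saliency[rc[0]][rc[1]])

# Toy example with two 2x3 "modalities" (say, intensity and motion):
intensity = [[0.1, 0.2, 0.9], [0.1, 0.1, 0.2]]
motion    = [[0.0, 0.0, 0.5], [0.0, 0.8, 0.0]]
sal = combine_saliency([intensity, motion])
print(most_salient(sal))  # -> (0, 2)
```

Top-down biasing, of the kind the FeatureGate-inspired extensions add, would enter here simply as non-uniform `weights` favouring the modality or feature channel that matches the current search target.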