Integrating Context-Free and Context-Dependent Attentional Mechanisms for Gestural Object Reference (2003)
DOI: 10.1007/3-540-36592-3_3

Cited by 16 publications (10 citation statements)
References 14 publications
“…This requires an accurate selection of the object and segmentation from the background to be adequate for the fast object learning component [14]. Although this can be achieved by only defining the correct 2D boundaries of the object view, the correct 3D position of the object is required when additional stationary cameras with pan tilt units focus the referenced object from different viewpoints or when several mobile users want to interact.…”
Section: Interaction For Object Learning (mentioning, confidence: 99%)
“…On the one hand, a VPL classifier as introduced by Heidemann et al [24,25] is applied. This approach was motivated by biological information processing principles which are believed to underlie early visual processing in the brain.…”
Section: Object Recognition Information Fusion and Online Learning (mentioning, confidence: 99%)
“…It is based on a neural architecture called "VPL" described in detail in earlier work, see [12] and references therein. In short, the VPL-classifier consists of three processing stages: The first two layers perform feature extraction by means of local principal component analysis (PCA), which is implemented in the separate steps of clustering the raw input data by vector quantization ("V-layer") and a subsequent local PCA ("P-layer").…”
Section: Trainable Object Recognition (mentioning, confidence: 99%)
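The two feature-extraction stages described in the excerpt above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes plain k-means for the vector-quantization V-layer and SVD-based PCA for the P-layer; all function names, the cluster count, and the component count are hypothetical, and the third ("L") classification stage is omitted.

```python
import numpy as np

def v_layer(X, k, iters=20, seed=0):
    """V-layer: vector quantization of the raw input via plain k-means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each sample to its nearest reference vector
        labels = np.linalg.norm(
            X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def p_layer(X, labels, k, n_components):
    """P-layer: a separate PCA basis fitted inside each VQ cluster."""
    bases = []
    for j in range(k):
        Xj = X[labels == j]
        if len(Xj) <= n_components:
            # degenerate cluster: fall back to an all-zero basis
            bases.append(np.zeros((n_components, X.shape[1])))
            continue
        Xc = Xj - Xj.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        bases.append(vt[:n_components])
    return bases

def encode(x, centers, bases):
    """Local-PCA feature vector of x w.r.t. its nearest VQ centre."""
    j = int(np.linalg.norm(centers - x, axis=1).argmin())
    return j, bases[j] @ (x - centers[j])
```

In this reading, `encode` yields the low-dimensional local-PCA coefficients that the final classification stage would consume.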
“…a cone-shaped activation in the direction of pointing, which highlights the nearest maximum of M. A detailed description of the underlying saliency algorithms, the integration to a single map M, and the selection of pointing targets can be found in [12].…”
Section: Attention Module and Object Reference (mentioning, confidence: 99%)
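The cone-shaped gating of the saliency map M can be sketched as below. This is a rough illustration only: the binary angular test, the exponential distance decay (used here so the *nearest* maximum along the ray wins), and all parameter values are assumptions, since the actual integration in [12] is not reproduced in the excerpt.

```python
import numpy as np

def pointed_target(M, origin, direction, half_angle=np.radians(15.0), tau=25.0):
    """Gate saliency map M by a cone along the pointing direction and
    return the (row, col) of the strongest remaining nearby activation."""
    h, w = M.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vx = xs - origin[0]          # origin given as (x, y) in pixel coordinates
    vy = ys - origin[1]
    dist = np.hypot(vx, vy) + 1e-9
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # cosine of the angle between each pixel offset and the pointing ray
    cos_a = (vx * d[0] + vy * d[1]) / dist
    cone = cos_a >= np.cos(half_angle)
    # assumed distance decay: nearer maxima inside the cone score higher
    gated = M * cone * np.exp(-dist / tau)
    if gated.max() <= 0.0:
        return None              # no salient point inside the cone
    return np.unravel_index(int(gated.argmax()), M.shape)
```

For example, with two equally salient peaks on the pointing ray, the gating selects the one closer to the hand, matching the "nearest maximum" behaviour described above.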