2020
DOI: 10.3389/fnbot.2020.00045
Geometric Affordance Perception: Leveraging Deep 3D Saliency With the Interaction Tensor

Abstract: Agents that need to act on their surroundings can significantly benefit from the perception of their interaction possibilities or affordances. In this paper we combine the benefits of the Interaction Tensor, a straightforward geometrical representation that captures multiple object-scene interactions, with deep learning saliency for fast parsing of affordances in the environment. Our approach works with visually perceived 3D pointclouds and enables querying a 3D scene for locations that support affordances su…

Cited by 12 publications (6 citation statements)
References 74 publications (95 reference statements)
“…In further studies, Ruiz and Mayol-Cuevas (2020) developed a geometric interaction descriptor for non-articulated, rigid object shapes. Given a 3D environment, the method demonstrated good generalization on detecting physically feasible object–environment configurations.…”
Section: Related Work
confidence: 99%
“…We are inspired by recent methods that have revisited geometric features, such as the bisector surface for scene–object indexing ( Zhao et al., 2014 ) and affordance detection ( Ruiz and Mayol-Cuevas, 2020 ). Initiating from a spatial representation makes sense if it helps reduce data training needs and simplify explanations—as long as it can outperform data-intensive approaches.…”
Section: Aros
confidence: 99%
“…[9], [10], [14], [16], [17] use convolutional neural networks (CNNs) to detect regions of affordance in an image. Ruiz and Mayol-Cuevas [11] predict affordance candidate locations in environments via the interaction tensor. In contrast to detecting different types of affordances, we focus on understanding the sitting affordance and use it for real-robot experiment.…”
Section: Related Work
confidence: 99%
“…Note that, our distance-based interaction representation explicitly captures the contact and the proximal relationships between the human body and the scene and can be regarded as the scene 'affordance'. Compared with other person-scene interaction representations [7,30,31,32,34], ours is purely geometry-based, is more efficient than dense affordance maps, and requires neither scene semantics nor action type labeling, nor affordance class annotation.…”
Section: Introduction
confidence: 99%