2014
DOI: 10.1016/j.cviu.2014.04.012
Multiview feature distributions for object detection and continuous pose estimation

Abstract: This paper presents a multiview model of object categories, generally applicable to virtually any type of image features, and methods to efficiently perform, in a unified manner, detection, localization and continuous pose estimation in novel scenes. We represent appearance as distributions of low-level, fine-grained image features. Multiview models encode the appearance of objects at discrete viewpoints, and, in addition, how these viewpoints deform into one another as the viewpoint continuously varies (as de…
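The abstract describes appearance as distributions of low-level features attached to discrete viewpoints, with deformations linking neighbouring viewpoints so that pose can vary continuously. Below is a minimal, hypothetical Python sketch of that idea only: each discrete viewpoint stores a normalized feature histogram, intermediate angles are approximated by blending the two nearest viewpoints, and a query is matched against the blended appearance. The class name, histogram representation, and blending scheme are assumptions made for illustration, not the paper's actual model.

```python
import numpy as np

class ToyMultiviewModel:
    """Toy stand-in for a multiview appearance model (illustrative only)."""

    def __init__(self, view_angles_deg, view_histograms):
        # view_angles_deg: sorted discrete training viewpoints, e.g. [0, 45, ..., 315]
        # view_histograms: (n_views, n_bins) feature distributions, one per viewpoint
        self.angles = np.asarray(view_angles_deg, dtype=float)
        self.hists = np.asarray(view_histograms, dtype=float)
        self.hists /= self.hists.sum(axis=1, keepdims=True)

    def appearance_at(self, angle_deg):
        """Blend the two nearest discrete viewpoints for a continuous angle."""
        a = angle_deg % 360.0
        i = np.searchsorted(self.angles, a) % len(self.angles)   # next viewpoint (wraps around)
        j = (i - 1) % len(self.angles)                           # previous viewpoint
        d_i = min(abs(self.angles[i] - a), 360.0 - abs(self.angles[i] - a))
        d_j = min(abs(self.angles[j] - a), 360.0 - abs(self.angles[j] - a))
        w_i = d_j / (d_i + d_j + 1e-9)                           # closer viewpoint gets more weight
        blended = w_i * self.hists[i] + (1.0 - w_i) * self.hists[j]
        return blended / blended.sum()

    def estimate_pose(self, query_hist, step_deg=1.0):
        """Score candidate angles by histogram intersection with the query."""
        q = np.asarray(query_hist, dtype=float)
        q /= q.sum()
        candidates = np.arange(0.0, 360.0, step_deg)
        scores = [np.minimum(q, self.appearance_at(a)).sum() for a in candidates]
        return candidates[int(np.argmax(scores))]
```

With, say, eight training viewpoints spaced 45° apart, estimate_pose returns the candidate angle whose blended histogram best matches the query; the actual model in the paper works with continuous feature distributions rather than fixed-bin histograms.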

Cited by 21 publications (16 citation statements)
References 64 publications (88 reference statements)
“…We use our probabilistic appearance-based pose estimation (PAPE) method (Erkent et al., 2016; Shukla et al., 2016) to detect hand gestures. It is based on probabilistic representations of objects and scenes by Teney and Piater (2014).…”
Section: Human-robot Collaboration Scenario
confidence: 99%
“…In [1], a complex object-viewpoint manifold is built and then untangled by factorizing the manifold into a view-invariant category representation and a category-invariant viewpoint representation, where the latter is used for pose estimation. In [27], the class representation is a probability distribution depending on the image and pose coordinates of extracted edge features. The object pose in the query image is estimated by marginalizing out the product of the query distribution and the class distribution with respect to the spatial coordinates.…”
Section: Related Work
confidence: 99%
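The statement above summarizes the pose-estimation rule attributed to [27]: multiply the query feature distribution by the class distribution for a candidate pose and marginalize over the spatial coordinates. The following is a minimal sketch of that scoring rule under simplifying assumptions: both distributions are discretized onto an image-coordinate grid, with one class grid per candidate pose. The function names and the grid discretization are assumptions, not taken from the paper.

```python
import numpy as np

def pose_scores(query_density, class_densities):
    """Unnormalized pose scores: product of distributions, marginalized over (x, y).

    query_density   : (H, W) distribution of query features over image coordinates
    class_densities : (P, H, W) class distributions, one spatial grid per candidate pose
    """
    q = query_density / query_density.sum()
    c = class_densities / class_densities.sum(axis=(1, 2), keepdims=True)
    # elementwise product of query and class distributions, then sum out x and y
    return (q[None, :, :] * c).sum(axis=(1, 2))

def estimate_pose(query_density, class_densities, pose_angles_deg):
    """Return the candidate pose angle with the highest marginal score."""
    scores = pose_scores(query_density, class_densities)
    return pose_angles_deg[int(np.argmax(scores))], scores
```

A continuous estimate can then be read off by interpolating the scores around the best discrete candidate, which is one common way to refine a grid of pose hypotheses.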
“…Results on the EPFL dataset (mean AE [°] / median AE [°]): Glasner et al. [11]: – / 24.8; Pepik et al. [21]: – / 4.7; Ozuysal et al. [20]: 46.48 / –; Redondo et al. [24]: 39.8 / 7; Teney et al. [27]: 34.7 / 5.2; Torki et al. [28]: … In an additional experiment on the EPFL dataset, instead of the term described in Equation (12), we replace it with a uniform distribution over [0°, 360°).…”
Section: Mean AE [°] / Median AE [°]
confidence: 99%
“…For both single-view approaches (Shiu and Huang 1991b; Ferri et al. 1993; Puech et al. 1997; Penman and Alwesh 2006; Doignon and De Mathelin 2007; Liu and Hu 2014) and multi-view ones (Houqin and Jianbo 2008; Becke 2015; Teney and Piater 2014; Becke and Schlegl 2015; Zhu et al. 2015; Zhang et al. 2017), the cylinder pose can be estimated with several techniques or a combination of them. Depending on the image data taken into consideration, either the cylinder orientation alone (three degrees of freedom, dof) or the full five-dof pose can be established from the contour data corresponding to the cylinder's circular borders.…”
Section: Introduction
confidence: 99%