2013 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2013.6630856
Pose estimation using local structure-specific shape and appearance context

Cited by 65 publications (50 citation statements)
References 29 publications
“…Most of the processing steps use generally applicable procedures available in open-source libraries and software, including registration, cleaning, and meshing of the point clouds (Cignoni et al, 2008;Rusu and Cousins, 2011;Buch et al, 2013;Kazhdan and Hoppe, 2013). General solutions for the segmentation of features like leaves and stems from plants, however, remain less developed, especially for 3D plant representations (Paproki et al, 2012;Paulus et al, 2013;Xia et al, 2015).…”
Section: 3D Sorghum Reconstructions From Depth Images
Confidence: 99%
“…We used two complementary representations of objects in these tests, namely the texlet-based context descriptors first presented in [9] as well as the line segment-based context descriptors presented in [7]. Note that for the line segments, the absolute number of features computed in an object view is fairly low.…”
Section: Methods
Confidence: 99%
“…Then, keypoints were extracted and the local feature descriptor of each keypoint was calculated for matching. Keypoint correspondences were obtained by matching the features, and false matches were discarded by considering geometric constraints like the one in the study by Buch et al [11]. Once the correspondence of the keypoints between the two point clouds was obtained, a coarse transformation between them was calculated.…”
Section: Methods
Confidence: 99%
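The pipeline described above ends by computing a coarse transformation from the surviving keypoint correspondences. A common way to do this, sketched below with numpy, is the SVD-based Kabsch method, which finds the least-squares rigid transform aligning matched point sets; this is a generic illustration, not necessarily the exact solver used in the cited study.

```python
import numpy as np

def coarse_rigid_transform(src, dst):
    """Estimate a rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of matched keypoint coordinates, where
    correspondence i pairs src[i] with dst[i]. Kabsch method:
    center both sets, take the SVD of the cross-covariance, and
    correct for reflections so R is a proper rotation.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection correction
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In practice this coarse estimate is then refined, e.g. with ICP, once the two point clouds are roughly aligned.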
“…A correspondence grouping method was implemented, as described in the study by Buch et al [11], to discard false correspondences based on geometric consistency. Figure 2(b,c) show the results of keypoint matching before and after discarding false correspondences in the registration of a head phantom.…”
Section: Matching and Discarding False Correspondences
Confidence: 99%
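The excerpt above describes discarding false correspondences based on geometric consistency. The core idea is that a rigid transform preserves pairwise distances, so a correct correspondence pair (i, j) must satisfy |d_src(i, j) − d_dst(i, j)| < ε. The numpy sketch below is a simplified voting-style stand-in for the correspondence grouping in [11]; the parameters `eps` and `min_frac` are illustrative choices, not values from the paper.

```python
import numpy as np

def consistency_filter(src, dst, eps=0.05, min_frac=0.5):
    """Keep correspondences whose pairwise distances are preserved.

    src, dst: (N, 3) matched keypoint coordinates (correspondence i
    pairs src[i] with dst[i]). Each correspondence votes for every
    other one whose source-side and destination-side distances agree
    within eps; correspondences supported by at least min_frac of the
    others are retained. Returns the indices of kept correspondences.
    """
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None, :] - dst[None, :, :], axis=-1)
    consistent = np.abs(d_src - d_dst) < eps
    votes = consistent.sum(axis=1) - 1       # exclude self-vote
    keep = votes >= min_frac * (len(src) - 1)
    return np.flatnonzero(keep)
```

Because outliers map source distances to unrelated destination distances, they collect few votes and are rejected, while the mutually consistent inlier set survives.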