2009 IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.2009.5152855

Learning 3-D object orientation from images

Abstract: We propose a learning algorithm for estimating the 3-D orientation of objects. Orientation learning is a difficult problem because the space of orientations is non-Euclidean, and in some cases (such as quaternions) the representation is ambiguous, in that multiple representations exist for the same physical orientation. Learning is further complicated by the fact that most man-made objects exhibit symmetry, so that there are multiple "correct" orientations. In this paper, we propose a new representation for or…
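
To make the quaternion ambiguity mentioned in the abstract concrete, here is a minimal sketch (my own illustration, not code from the paper) using NumPy and SciPy: the antipodal quaternions q and -q map to the same rotation matrix, so a quaternion-valued regression target is not unique.

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Build an arbitrary orientation and read out its quaternion (x, y, z, w).
    q = Rotation.from_euler("xyz", [30, 45, 60], degrees=True).as_quat()

    # q and -q are different points in R^4 but encode the same physical rotation.
    R_pos = Rotation.from_quat(q).as_matrix()
    R_neg = Rotation.from_quat(-q).as_matrix()
    print(np.allclose(R_pos, R_neg))  # True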


Cited by 79 publications (46 citation statements; citing publications from 2011 to 2024). References 31 publications (31 reference statements).
“…Grassia et al [16] pointed out that Euler angles and quaternions are not suitable for orientation differentiation and integration operations and proposed exponential map as a more robust rotation representation. Saxena et al [28] observed that the Euler angles and quaternions cause learning problems due to discontinuities. However, they did not propose general rotation representations other than direct regression of 3x3 matrices, since they focus on learning representations for objects with specific symmetries.…”
Section: Related Work
confidence: 99%
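
As a rough illustration of the discontinuity argument in the quote above (my own sketch, not code from either cited work): two orientations that are only 0.2 degrees apart can have Euler-angle coordinates that differ by almost 360 degrees, which is what makes naive regression of such coordinates hard.

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Two orientations that differ by a 0.2-degree rotation about z.
    r1 = Rotation.from_euler("z", 179.9, degrees=True)
    r2 = Rotation.from_euler("z", -179.9, degrees=True)

    # Geodesic (true angular) distance: about 0.2 degrees.
    geodesic_deg = np.degrees((r1.inv() * r2).magnitude())

    # Distance in Euler-angle coordinates: the z angle jumps by about 359.8 degrees.
    euler_gap_deg = np.abs(r1.as_euler("xyz", degrees=True)
                           - r2.as_euler("xyz", degrees=True)).max()

    print(geodesic_deg, euler_gap_deg)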
“…Saxena et al [1,2,23] showed that a 'grasping point' could be estimated from the image using supervised learning algorithms, and that this method generalized to a large number of novel objects. However, their grasping point representation only indicated where to grasp, and other parameters such as gripper orientation were left to be estimated by other methods (e.g., [24]). In later works, Saxena et al [25] also used point clouds for grasping.…”
Section: Introduction
confidence: 99%
“…For 10 joints, we convert each rotation matrix to half-space quaternions in order to more compactly represent the joint's orientation. (A more compact representation would be to use Euler angles, but they suffer from a representation problem called gimbal lock [31].) Along with these joint orientations, we would like to know whether the person is standing or sitting, and whether or not the person is leaning over.…”
Section: A. Features
confidence: 99%
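
A minimal sketch of what "half-space quaternions" plausibly refers to in the quote above (my reading, not code from the cited paper): convert each rotation matrix to a unit quaternion and resolve the q/-q sign ambiguity by keeping the representative whose scalar part is non-negative. The function name and the scalar-last convention are my own choices.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def to_halfspace_quat(R_mat):
        """Rotation matrix -> unit quaternion (x, y, z, w) with w >= 0."""
        q = Rotation.from_matrix(R_mat).as_quat()  # SciPy uses scalar-last order
        return -q if q[3] < 0 else q

    # Round-trip check: the canonicalized quaternion still encodes the same rotation.
    R_mat = Rotation.from_euler("zyx", [100, -20, 150], degrees=True).as_matrix()
    q = to_halfspace_quat(R_mat)
    print(q[3] >= 0, np.allclose(Rotation.from_quat(q).as_matrix(), R_mat))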