2014 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2014.6942637

Omnidirectional 3D reconstruction in augmented Manhattan worlds

Abstract: This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omni…
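The abstract's central geometric point, that 3D planes do not project to planes in the omnidirectional domain, still leaves plane hypotheses usable, because the depth a plane induces along any viewing ray has a simple closed form. Below is a minimal sketch of that relation on the unit viewing sphere; the function name and the ray/plane parameterization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def plane_depth_along_ray(ray_dir, n, d):
    """Depth t at which the ray t * ray_dir meets the plane n.X + d = 0.

    ray_dir -- (3,) unit viewing direction on the omnidirectional sphere
    n, d    -- plane normal (3,) and offset
    Returns np.inf for rays (near-)parallel to the plane or for
    intersections behind the camera.
    """
    denom = float(np.dot(n, ray_dir))
    if abs(denom) < 1e-9:
        return np.inf
    t = -d / denom
    return t if t > 0.0 else np.inf

# Example: ground plane 1.5 m below the camera (z + 1.5 = 0), ray looking
# forward and down; the induced depth is finite and well defined even
# though the plane's omnidirectional image is a curve, not a line.
ray = np.array([0.6, 0.0, -0.8])
print(plane_depth_along_ray(ray, np.array([0.0, 0.0, 1.0]), 1.5))  # 1.875
```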

Cited by 61 publications (34 citation statements) · References 38 publications
Citation statements: 1 supporting, 33 mentioning, 0 contrasting; published between 2016 and 2023
“…Our approach differentiates itself from existing solutions on various fronts as shown in Table I. We evaluate the performance of our proposed approach on various publicly-available datasets including the KITTI dataset [21], the Multi-FOV synthetic dataset [27] (pinhole, fisheye, and catadioptric lenses), an omnidirectional-camera dataset [28], and the Oxford Robotcar 1000km Dataset [29].…”
Section: Methods
Mentioning, confidence: 99%
“…In our experiments, we found that while our proposed solution was sufficiently powerful to model different camera optics, it was significantly better at modeling pinhole lenses as compared to fisheye and catadioptric lenses. Sensor fusion with learned ego-motion: on fusing our proposed VO method with intermittent GPS updates (every 150 frames), the pose-graph-optimized ego-motion solution achieves sufficiently high accuracy relative to ground truth. We test on a variety of publicly-available datasets including (a) the Multi-FOV synthetic dataset [27] (pinhole shown above), (b) an omnidirectional-camera dataset [28], (c) the Oxford Robotcar 1000km Dataset [29] (2015-11-13-10-28-08), and (d-h) the KITTI dataset [21]. Weak supervision such as GPS measurements can be especially advantageous in recovering improved estimates for localization, while simultaneously minimizing uncertainties associated with pure VO-based approaches.…”
Section: B. Varied Camera Optics
Mentioning, confidence: 99%
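The fusion described in this statement, visual odometry corrected by intermittent GPS updates through pose-graph optimization, reduces to a sparse least-squares problem over the poses. The following is a deliberately simplified 1-D sketch of that pattern; the toy numbers, anchor spacing, and weights are assumptions, not the cited system (which uses GPS fixes every 150 frames).

```python
import numpy as np

N = 10                                   # poses x_0 .. x_9 along a line
odo = np.full(N - 1, 1.0) + 0.05         # drifting odometry increments
gps = {0: 0.0, 5: 5.0, 9: 9.0}           # sparse absolute position fixes

rows, rhs = [], []
for i, u in enumerate(odo):              # odometry: x_{i+1} - x_i = u_i
    r = np.zeros(N)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r)
    rhs.append(u)
for j, g in gps.items():                 # GPS anchor: x_j = g_j (up-weighted)
    r = np.zeros(N)
    r[j] = 10.0
    rows.append(r)
    rhs.append(10.0 * g)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(x, 2))                    # drift is pulled back toward GPS
```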
“…This has gained significant popularity, due to its robustness to minor calibration inaccuracies. Recent extensions of this idea include the addition of weighting terms [17], an iterative variant [18], and the augmented Manhattan world assumption [19, 20].…”
Section: Bottom-up Reconstruction
Mentioning, confidence: 99%
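For context on the "augmented Manhattan world" assumption cited above: it relaxes the classic three-orthogonal-directions model so that surfaces are either horizontal (normals along gravity, e.g. floors and ceilings) or vertical with arbitrary heading. A minimal sketch of classifying a unit surface normal under that model; the tolerance and function names are assumptions, not taken from the paper.

```python
import numpy as np

def classify_normal(n, up=np.array([0.0, 0.0, 1.0]), tol_deg=10.0):
    """Classify a unit surface normal under the augmented Manhattan model:
    horizontal surfaces (floor/ceiling) have normals along gravity,
    vertical surfaces (walls, any heading) have normals perpendicular
    to it; anything else violates the assumption."""
    c = abs(float(np.dot(n, up)))
    if c > np.cos(np.deg2rad(tol_deg)):
        return "horizontal"
    if c < np.sin(np.deg2rad(tol_deg)):
        return "vertical"
    return "non-conforming"

print(classify_normal(np.array([0.0, 0.0, 1.0])))                  # horizontal
print(classify_normal(np.array([np.cos(0.3), np.sin(0.3), 0.0])))  # vertical
```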
“…In Table 5 we expand on this evaluation by testing the contribution of every individual energy term from Equation (20). In all cases, removing an energy term causes an increase in the error, meaning that each term encodes useful information for 3D reconstruction and none of the terms are redundant.…”
Section: Examination of Subsystems
Mentioning, confidence: 99%
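The ablation described here, removing one term at a time from the energy in the cited Equation (20), follows a standard pattern whenever the energy is a weighted sum of terms. A toy sketch of that pattern; the term definitions, weights, and evaluation point are placeholders, not the paper's actual energy.

```python
def data_term(x):   return (x - 2.0) ** 2   # fidelity to measurements
def smooth_term(x): return 0.1 * x ** 2     # smoothness surrogate
def plane_term(x):  return abs(x - 1.0)     # plane-prior surrogate

TERMS = [data_term, smooth_term, plane_term]

def energy(x, weights):
    """Weighted-sum energy E(x) = sum_k w_k * E_k(x)."""
    return sum(w * t(x) for w, t in zip(weights, TERMS))

full = [1.0, 1.0, 1.0]
for k, t in enumerate(TERMS):
    w = full.copy()
    w[k] = 0.0                    # drop one term, as in the ablation
    print(f"without {t.__name__}: E(1.8) = {energy(1.8, w):.3f}")
```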
“…Schoenbein et al. [22] proposed a high-quality omnidirectional 3D reconstruction of Manhattan worlds from catadioptric stereo video cameras. However, these catadioptric omnidirectional cameras have a large number of system parameters, including the camera and mirror calibration.…”
Section: Approximated Room Geometry Reconstruction
Mentioning, confidence: 99%