2016 16th International Conference on Control, Automation and Systems (ICCAS)
DOI: 10.1109/iccas.2016.7832307
3D reconstruction of structures using spherical cameras with small motion

Cited by 15 publications (10 citation statements)
References 15 publications
“…There are many optical flow algorithms designed to work with perspective imagery (refer to [180] for a recent survey). Similarly to sparse feature matching, some works [39,120,172] adopt traditional optical flow approaches for 360° image processing, sometimes in between pre- and post-processing steps. The most straightforward workaround found in the literature is to circularly pad ERP images before computing the optical flow, avoiding longitudinal disconnection [39,172] (recall that a similar strategy was adopted in the context of deep learning for depth or layout estimation, as mentioned in Section 3).…”
Section: Dense Feature Extraction and Matching
confidence: 99%
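The circular-padding workaround described in the excerpt above can be sketched in a few lines. This is a minimal NumPy illustration, not the cited works' implementation; the pad width and the toy image are placeholders:

```python
import numpy as np

def circular_pad_erp(erp: np.ndarray, pad: int) -> np.ndarray:
    """Pad an equirectangular (H x W [x C]) image circularly along the
    longitude (width) axis, so pixels near the left/right seam see their
    true neighbours before optical flow is computed."""
    widths = [(0, 0), (pad, pad)] + [(0, 0)] * (erp.ndim - 2)
    return np.pad(erp, widths, mode="wrap")

def crop_pad(flow_padded: np.ndarray, pad: int) -> np.ndarray:
    """After computing flow on the padded image, crop the pad back off."""
    return flow_padded[:, pad:-pad]

erp = np.arange(12).reshape(3, 4)    # toy 3x4 "ERP" image
padded = circular_pad_erp(erp, 1)    # shape (3, 6)
# the left pad column is the original rightmost column, and vice versa
assert np.array_equal(padded[:, 0], erp[:, -1])
assert np.array_equal(padded[:, -1], erp[:, 0])
```

Cropping the pad after flow computation recovers the original width, so downstream code never sees the wrapped columns.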
“…The authors also present hints on how additional cameras can further improve the results. Pathak et al [120,123] also allow small baseline and arbitrary rotation. They first estimate the 5-DoF pose between the images using A-KAZE features and RANSAC-enabled 8-PA [71], and then “derotate” the images.…”
Section: Depth Estimation
confidence: 99%
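The “derotate” step mentioned above — removing the estimated rotation so that two spherical views differ only by translation — can be illustrated with unit bearing vectors. This is a hedged NumPy sketch: the rotation below is a stand-in, not Pathak et al.'s estimate, and `erp_to_bearings` is a hypothetical helper name:

```python
import numpy as np

def erp_to_bearings(h: int, w: int) -> np.ndarray:
    """Unit bearing vector for every pixel centre of an h x w ERP image."""
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # (+pi/2 .. -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)              # (h, w, 3)

def derotate_bearings(bearings: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Apply the inverse of the estimated rotation R to every bearing:
    b' = R^T b, written as a row-vector product b @ R."""
    return bearings @ R

# stand-in estimated rotation: 90 degrees about the z axis
R_z90 = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
b = erp_to_bearings(4, 8)
bd = derotate_bearings(b, R_z90)
```

Since the rotation is orthogonal, the derotated bearings remain unit vectors; resampling them back into ERP pixel coordinates would then yield a rotation-compensated image, leaving only the translational component between the views.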
“…With the entire field of view, a 360° camera is more robust to rotation and translation movements compared to a normal perspective camera [22]. There are some recent methods aiming to tackle 360° perception tasks.…”
Section: Related Work
confidence: 99%
“…machine vision applications, including scene reconstruction [8,23,27,30,31,33,34,36,44], view synthesis [20,26], navigation [42,43,46], and tracking [3,21].…”
Section: Introduction
confidence: 99%