2022
DOI: 10.1007/978-3-031-19827-4_12
MODE: Multi-view Omnidirectional Depth Estimation with 360° Cameras

Cited by 5 publications (6 citation statements)
References 34 publications
“…CSDNet [25] presents an approach for panoramic depth estimation using spherical convolutions and epipolar constraints. MODE [7] uses Cassini [26] projection for spherical stereo matching.…”
Section: Omnidirectional Depth Estimation (mentioning)
confidence: 99%
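The Cassini projection referenced in this statement is the transverse aspect of the equirectangular projection (ERP); reprojecting a spherical stereo pair this way lets correspondences be searched along a single image axis. The sketch below illustrates that remapping under stated assumptions: a plain ERP input, nearest-neighbour resampling, and illustrative names such as erp_to_cassini and the coordinate conventions, none of which are taken from the MODE code.

    import numpy as np

    def erp_to_cassini(erp: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
        """Remap an ERP panorama (H x W x C) to the Cassini projection.

        Cassini is the transverse aspect of ERP, so the output is "tall":
        out_h covers a 2*pi angular range, out_w covers pi (out_h ~ 2*out_w).
        """
        h, w = erp.shape[:2]

        # Angular coordinates of each Cassini pixel centre.
        ys, xs = np.meshgrid(np.arange(out_h), np.arange(out_w), indexing="ij")
        cas_y = np.pi - (ys + 0.5) / out_h * 2.0 * np.pi      # in (-pi, pi)
        cas_x = (xs + 0.5) / out_w * np.pi - np.pi / 2.0      # in (-pi/2, pi/2)

        # Inverse Cassini mapping: Cassini coordinates -> latitude/longitude.
        lat = np.arcsin(np.sin(cas_y) * np.cos(cas_x))
        lon = np.arctan2(np.tan(cas_x), np.cos(cas_y))

        # Latitude/longitude -> ERP pixel indices (nearest-neighbour lookup).
        u = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
        v = ((np.pi / 2.0 - lat) / np.pi * h).astype(int).clip(0, h - 1)
        return erp[v, u]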
“…, S_L, we can obtain the ERP features at each index based on Equations (4), (6) and (7). By concatenating these features together, we form the spherical feature volume for the i-th camera, which is denoted as…”
Section: Spherical Cost Volume (mentioning)
confidence: 99%
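This statement describes a sphere-sweep construction: per-camera ERP features are warped to the reference panorama at each hypothesis sphere up to S_L and concatenated into a spherical feature volume. A minimal PyTorch sketch of that general idea follows; the warping geometry (a source camera offset by a pure translation t), the ERP coordinate convention, and the names erp_directions and spherical_feature_volume are assumptions for illustration and do not reproduce the cited paper's Equations (4), (6) and (7).

    import math
    import torch
    import torch.nn.functional as F

    def erp_directions(h: int, w: int) -> torch.Tensor:
        """Unit ray direction for every ERP pixel, shape (h, w, 3)."""
        v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        lat = math.pi / 2 - (v + 0.5) / h * math.pi        # [-pi/2, pi/2]
        lon = (u + 0.5) / w * 2 * math.pi - math.pi        # [-pi, pi]
        return torch.stack((torch.cos(lat) * torch.sin(lon),   # x (right)
                            torch.sin(lat),                    # y (up)
                            torch.cos(lat) * torch.cos(lon)),  # z (forward)
                           dim=-1)

    def spherical_feature_volume(src_feat: torch.Tensor,  # (C, H, W) source ERP features
                                 t: torch.Tensor,         # (3,) source camera position
                                 radii: torch.Tensor) -> torch.Tensor:
        """Warp source features at each hypothesis sphere, concatenate -> (L*C, H, W)."""
        c, h, w = src_feat.shape
        dirs = erp_directions(h, w)                        # (H, W, 3)
        warped = []
        for r in radii:
            p = r * dirs - t                               # 3-D points in the source frame
            lon = torch.atan2(p[..., 0], p[..., 2])
            lat = torch.asin(p[..., 1] / p.norm(dim=-1).clamp(min=1e-8))
            # Normalise ERP coordinates to grid_sample's [-1, 1] range
            # (longitude wrap-around at the ERP seam is ignored in this sketch).
            grid = torch.stack((lon / math.pi, -lat / (math.pi / 2)), dim=-1)
            warped.append(F.grid_sample(src_feat[None], grid[None],
                                        align_corners=False)[0])
        return torch.cat(warped, dim=0)                    # spherical feature volume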
“…The authors proposed a model combining geometric structure estimation with depth estimation based on the prior that geometric structure is associated with depth information [17]. Li et al. [18] proposed a multi-view panoramic depth estimation network, which first estimates depth maps from different camera pairs by stereo matching and then fuses the depth maps to achieve robustness to mud spots, water droplets on camera lenses, and glare caused by bright light. Jin et al. [17] put forward a multitask model combining geometric structure estimation and depth estimation from the prior that geometric structure is associated with depth information, and both can assist each other in optimization.…”
Section: Related Work (mentioning)
confidence: 99%
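The pipeline summarised here runs stereo matching on each camera pair and then fuses the per-pair depth maps so that artefacts affecting one lens (mud spots, water droplets, glare) do not corrupt the final panorama. Methods of this kind typically learn the fusion; the confidence-weighted average below is only a stand-in sketch to show the data flow, with array names and shapes assumed.

    import numpy as np

    def fuse_depth_maps(depths: np.ndarray, confidences: np.ndarray) -> np.ndarray:
        """Fuse K depth maps (K, H, W), already aligned to one reference panorama,
        using per-pixel confidences (K, H, W).

        Pixels that are unreliable in one pair (e.g. occluded by a droplet or
        washed out by glare) should carry a low confidence there, so the other
        camera pairs dominate the fused estimate at those pixels.
        """
        w = confidences / confidences.sum(axis=0, keepdims=True).clip(min=1e-8)
        return (w * depths).sum(axis=0)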
“…Li et al. [18] proposed a multi‐view panoramic depth estimation network, which first estimates depth maps from different camera pairs by stereo matching and then fuses the depth maps to achieve robustness to mud spots, water droplets on camera lenses, and glare caused by bright light. Jin et al.…”
Section: Related Work (mentioning)
confidence: 99%