2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00900

Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging

Cited by 8 publications (5 citation statements) · References 58 publications
“…Recently, depth cameras based on direct ToF - marketed as LiDAR scanners [78] - have been released on a few Apple smartphones, such as the Pro versions of iPhones 12 and 13 [4]. Direct ToF cameras use timed light pulses to gather depth measurements in a scene, instead of illuminating the whole scene at once with modulated light as with indirect ToF cameras [13]. Thus, the raw depth maps obtained from direct ToF cameras are sparse [31,48], and the task of depth completion is more suitable to completing sparse depth maps (see Section 2).…”
Section: Discussion
confidence: 99%
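The passage above notes that direct ToF gathers depth from the round-trip time of timed light pulses. A minimal sketch of that underlying calculation, purely illustrative and not drawn from any of the cited implementations:

```python
# Direct ToF: a light pulse travels to the surface and back,
# so depth = c * t / 2 for a measured round-trip time t.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth in meters recovered from a pulse round-trip time."""
    return C * t_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
d = depth_from_round_trip(10e-9)
```

This also hints at why direct-ToF depth maps are sparse: each measurement requires detecting a discrete pulse return, rather than sampling modulated light densely across the whole sensor as indirect ToF does.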
“…In the future, we will incorporate our depth inpainting technique into a full AR system with 6DoF tracking, and evaluate it with mobile AR users. Dynamic scenarios will also allow us to study the possible impact of motion blur in RGB images and flying-pixel artifacts in ToF depth images [13,98]. Incorporating InDepth into ARCore is an open challenge, however: although raw depth images can be acquired in ARCore [25], the Depth API does not currently support passing processed depth images back into the AR mapping and tracking algorithm.…”
Section: Discussion
confidence: 99%
“…an object, at which the depth abruptly changes. This may lead to an incorrect estimation of the depth in the incident pixel [76], [77]. To evaluate this phenomenon, we make use of the 1 × 1 m white panel.…”
Section: F. Key Parameters for a ToF Camera
confidence: 99%
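The flying pixels described above appear at depth discontinuities, where a sensor pixel mixes light from foreground and background and reports a depth between the two. A common post-hoc heuristic is to flag pixels that deviate strongly from their local median depth; the sketch below is an assumed illustration of that filter (the `flag_flying_pixels` helper and its 0.1 m threshold are hypothetical, and this is not the learned-mask approach of the Mask-ToF paper itself):

```python
import numpy as np

def flag_flying_pixels(depth: np.ndarray, win: int = 3, thresh: float = 0.1) -> np.ndarray:
    """Return a boolean mask marking pixels whose depth deviates from the
    local median by more than `thresh` meters (a flying-pixel heuristic)."""
    pad = win // 2
    padded = np.pad(depth, pad, mode="edge")
    h, w = depth.shape
    med = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            med[i, j] = np.median(padded[i:i + win, j:j + win])
    return np.abs(depth - med) > thresh

# A single pixel floating 0.5 m off a flat 1 m surface is flagged.
depth = np.full((5, 5), 1.0)
depth[2, 2] = 1.5
mask = flag_flying_pixels(depth)
```

Median filtering of this kind removes flying pixels after capture; the Mask-ToF work instead addresses them at acquisition time via learned microlens masks.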
“…Conventional imaging systems are typically designed in a sequential approach, where lens and sensors are hand-engineered based on specific metrics, such as PSF spot size or dynamic range, independent of the downstream camera task. Departing from this conventional design approach, a large body of work in computational imaging has explored jointly optimizing the optics and reconstruction algorithms, with successful applications in color image restoration [Chakrabarti 2016;, microscopy [Horstmeyer et al 2017;Kellman et al 2019;Nehme et al 2020;Shechtman et al 2016], monocular depth imaging [Chang and Wetzstein 2019;Haim et al 2018;He et al 2018;, super-resolution and extended depth of field [Sitzmann et al 2018;Sun et al 2021], time-of-flight imaging [Chugunov et al 2021;Marco et al 2017;, high-dynamic range imaging [Metzler et al 2019;Sun et al 2020], active-stereo imaging [Baek and Heide 2021], hyperspectral imaging , and other computer vision tasks [Tseng et al 2021b].…”
Section: Related Work
confidence: 99%