2019
DOI: 10.1177/0278364919850296
Sparse depth sensing for resource-constrained robots

Abstract: We consider the case in which a robot has to navigate in an unknown environment but does not have enough on-board power or payload to carry a traditional depth sensor (e.g., a 3D lidar) and thus can only acquire a few (point-wise) depth measurements. We address the following question: is it possible to reconstruct the geometry of an unknown environment using sparse and incomplete depth measurements? Reconstruction from incomplete data is not possible in general, but when the robot operates in man-made environments…

Cited by 13 publications (14 citation statements). References 97 publications.
“…Depth completion. Depth completion is an umbrella term that covers a collection of related problems with a variety of different input modalities (e.g., relatively dense depth input [5,6,7] vs. sparse depth measurements [8,9]; with color images for guidance [6,10] vs. without [4]). The problems and solutions are usually sensor-dependent, and as a result they face vastly different levels of algorithmic challenges.…”
Section: Related Work
confidence: 99%
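For readers unfamiliar with the sparse-measurement setting referenced in the statement above ([8,9]), the following is a minimal sketch of how such an input is typically formed: a handful of pixels are retained from a dense depth map and the rest are zeroed out. It is written in Python/NumPy; the function name, array shapes, and sample count are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def sample_sparse_depth(dense_depth, num_samples=200, rng=None):
    """Keep only `num_samples` randomly chosen valid pixels of a dense depth map.

    Returns a map of the same shape in which unsampled pixels are set to 0,
    mimicking the sparse (point-wise) depth measurements discussed above.
    """
    rng = rng or np.random.default_rng(0)
    sparse = np.zeros_like(dense_depth)
    valid = np.flatnonzero(dense_depth > 0)          # pixels with a valid reading
    chosen = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
    sparse.flat[chosen] = dense_depth.flat[chosen]   # copy only the sampled depths
    return sparse

# Example: a synthetic 480x640 depth map reduced to ~200 measurements.
depth = np.full((480, 640), 2.5, dtype=np.float32)
sparse_depth = sample_sparse_depth(depth, num_samples=200)
print(np.count_nonzero(sparse_depth))  # 200
```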
“…However, the completion problem becomes much more challenging when the input depth image has much lower density, because the inverse problem is ill-posed. For instance, Ma et al. [8,9] addressed depth reconstruction from only hundreds of depth measurements by assuming a strong prior of piecewise linearity in depth signals. Another example is autonomous driving with 3D LiDARs, where the projected depth measurements on the camera image space account for roughly 4% of the pixels [4].…”
Section: Related Work
confidence: 99%
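The piecewise-linearity prior mentioned in the statement above is commonly encoded as sparsity of second-order differences. Below is a minimal 1D sketch of that idea, assuming CVXPY is available; the synthetic signal, sample indices, and solver defaults are illustrative assumptions, not the cited authors' actual implementation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical ground-truth depth profile: piecewise linear (a flat wall, then a ramp).
n = 100
truth = np.concatenate([np.full(40, 3.0), np.linspace(3.0, 1.0, 60)])

# Sparse measurements: only a few point-wise depth readings are available.
rng = np.random.default_rng(1)
idx = np.sort(rng.choice(n, size=8, replace=False))
y = truth[idx]

# Second-order difference operator D: (D x)[i] = x[i] - 2*x[i+1] + x[i+2].
D = np.diff(np.eye(n), n=2, axis=0)

# Recover the full profile by promoting piecewise linearity (sparse D x)
# while matching the sparse samples exactly.
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(D @ x)), [x[idx] == y])
problem.solve()

print("max reconstruction error:", np.max(np.abs(x.value - truth)))
```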
“…To make the feature extraction robust, AutoPCD transforms the original points from Cartesian coordinates into the learned planes' coordinates: the Encoder module takes the plane coefficients predicted by the Plane Predictor and multiplies them with the Cartesian point coordinates to achieve the transformation. Plane Coefficients Prediction: AutoPCD then approximates the PCD with several 3D planes and learns the plane coefficients. The idea is similar in spirit to a recently proposed work on sparse depth reconstruction [4]. However, unlike the existing approach, which uses compressed sensing and geometric modeling, AutoPCD leverages machine learning to learn the coefficients.…”
Section: AutoPCD Design
confidence: 97%
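The plane-coordinate transformation described in the quoted passage amounts to projecting each point onto a set of plane equations. Below is a minimal sketch of that step, assuming planes are parameterized as ax + by + cz + d = 0 with unit normals; the function and variable names are hypothetical and not taken from AutoPCD.

```python
import numpy as np

def to_plane_coordinates(points, planes):
    """Express points relative to a set of planes.

    points: (N, 3) Cartesian coordinates.
    planes: (K, 4) rows of plane coefficients (a, b, c, d) with unit normals,
            i.e., the plane is {p : a*x + b*y + c*z + d = 0}.
    Returns an (N, K) array of signed distances from every point to every plane,
    i.e., the "multiply coefficients with point coordinates" step described
    above, carried out in homogeneous coordinates.
    """
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return homogeneous @ planes.T                                     # (N, K)

# Example: two axis-aligned planes (floor z = 0 and wall x = 2).
planes = np.array([[0.0, 0.0, 1.0,  0.0],
                   [1.0, 0.0, 0.0, -2.0]])
points = np.array([[1.0, 0.5, 1.2],
                   [3.0, 0.0, 0.4]])
print(to_plane_coordinates(points, planes))  # signed distances to each plane
```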