2022
DOI: 10.1142/s0219467824500013

3D Vision Using Multiple Structured Light-Based Kinect Depth Cameras

Abstract: Real-time 3D scanning of a scene or object using multiple depth cameras is often required in many applications but is still a challenging task for the computer vision community, especially when the object or scene is partially occluded and dynamic. If active depth sensors are used in this case, the quality of the resulting depth maps is degraded by interference between the active radiation emitted by each sensor. Passive 3D sensors such as stereo cameras avoid the issue of interference, as they do not emit any ac…

Cited by 4 publications (5 citation statements)
References 43 publications
“…The infrared projector emits a speckle pattern of infrared dots into the camera's field of view. Using this depth-sensing technology, the Kinect sensor can create 3D maps of objects within its range by measuring changes to the reference speckle pattern (Kamble & Mahajan, 2022). A Kinect sensor can capture color and depth images simultaneously at a frame rate of 30 frames per second.…”
Section: Methods
confidence: 99%
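The excerpt above describes the Kinect's structured-light principle: depth is recovered from the lateral shift of each projected infrared dot relative to a stored reference pattern. A minimal sketch of that triangulation step follows; the focal length and baseline are assumed values for illustration, not the Kinect's actual calibration.

```python
import numpy as np

# Minimal sketch of the structured-light principle described above: the
# sensor measures how far each projected infrared dot has shifted
# (disparity) relative to a stored reference pattern and converts that
# shift to depth by triangulation. The constants below are illustrative
# values, not the Kinect's actual calibration.
FOCAL_LENGTH_PX = 580.0  # IR camera focal length in pixels (assumed)
BASELINE_M = 0.075       # projector-to-camera baseline in metres (assumed)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert per-dot speckle disparity (pixels) to depth (metres)."""
    depth = np.full(disparity_px.shape, np.nan)
    valid = disparity_px > 0            # zero shift gives no depth estimate
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# Dots that shift more are closer: 20 px maps to about 2.17 m, 10 px to about 4.35 m.
print(depth_from_disparity(np.array([10.0, 20.0, 0.0])))
```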
“…where Γ is known as the growth rate of the estimation law. Using (20), the instruction of ϕ and φ, and φ → 0, we may conclude that Π → A_M, and…”
Section: Assumption
confidence: 99%
“…We set the initial learning rate to 0.0005, which is reduced every 15 epochs by multiplying the learning rate by 0.1. In addition, we discretize the parameter estimation formula in (20) by considering φ̇ = (φ(k+1) − φ(k))/∆k, k ∈ Z≥0; then, knowing that ∆k = 1 as k is a sequentially increasing index (the index of intervals in the registration process), the estimation rule (20) turns into…”
Section: ADGC-LSTM Parameters' Configuration
confidence: 99%
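The configuration quoted above combines a step learning-rate schedule (0.0005, reduced by a factor of 10 every 15 epochs) with a forward-difference discretization of the estimation law. The sketch below illustrates both under the assumption that the right-hand side of rule (20), which the excerpt does not reproduce, can be represented by a placeholder g(k) scaled by the growth rate Γ.

```python
# Illustrative sketch only: the right-hand side of the citing paper's
# estimation rule (20) is not reproduced in the excerpt, so g_k below is
# a hypothetical placeholder for it, scaled by the growth rate gamma (Γ).

def learning_rate(epoch: int, base_lr: float = 5e-4) -> float:
    """Step schedule: multiply the initial rate by 0.1 every 15 epochs."""
    return base_lr * (0.1 ** (epoch // 15))

def discretized_update(phi_k: float, g_k: float, gamma: float,
                       dk: float = 1.0) -> float:
    """Forward-difference discretization of the estimation law:
    (phi(k+1) - phi(k)) / dk = gamma * g(k)
      =>  phi(k+1) = phi(k) + dk * gamma * g(k).
    With dk = 1 (k is a sequential interval index) this is a plain
    additive update."""
    return phi_k + dk * gamma * g_k

# The learning rate drops from 5e-4 to 5e-5 at epoch 15 and 5e-6 at epoch 30.
print(learning_rate(0), learning_rate(15), learning_rate(30))
```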
“…The general process can be broken down into the following steps: feature point extraction, image matching, dense point cloud generation, surface reconstruction, and texture mapping [11]. Nevertheless, vision-based systems are limited when reconstructing an environment with low illumination and sparse or repetitive features, owing to the difficulty of computing depth by finding correspondences between two images [12].…”
Section: Introduction
confidence: 99%
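The pipeline listed in this excerpt begins with feature point extraction and image matching. A minimal sketch of those two steps using OpenCV's ORB features and brute-force Hamming matching is given below; the image paths are placeholders, and the later steps (dense point cloud, surface reconstruction, texture mapping) are not covered.

```python
import cv2

# Sketch of the first two pipeline steps quoted above (feature point
# extraction and image matching) using OpenCV's ORB features.
# The image paths are placeholders.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary);
# cross-checking keeps only mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences between the two views")

# The remaining steps (dense point cloud, surface reconstruction, texture
# mapping) build on these correspondences and are outside this sketch.
```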