2016
DOI: 10.7305/automatika.2017.02.1758

Multiple Kinect V2 Calibration

Abstract: In this paper, we propose a method to easily calibrate multiple Kinect V2 sensors. It requires the cameras to simultaneously observe a 1D object shown at different orientations (at least three) or a 2D object for at least one acquisition. This is possible due to the built-in coordinate mapping capabilities of the Kinect. Our method follows five steps: image acquisition, pre-calibration, point cloud matching, intrinsic parameters initialization, and final calibration. We modeled radial and distortion parameters…
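
A note on the distortion modelling mentioned at the end of the abstract: assuming it refers to the conventional radial plus tangential (Brown-Conrady) lens model, the Python sketch below shows how such coefficients act on normalized image coordinates. The function name and the two-radial/two-tangential parameter set (k1, k2, p1, p2) are assumptions for illustration, not the paper's exact parameterization.

import numpy as np

def distort_normalized(xy, k1, k2, p1, p2):
    """Apply a radial-tangential (Brown-Conrady) distortion model to
    normalized image coordinates xy, an (N, 2) array.

    Illustrative only: the coefficient set used in the paper may differ.
    """
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y                      # squared distance from the optical axis
    radial = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

Multiplying the distorted normalized coordinates by the camera matrix then gives pixel coordinates, which is how distortion and intrinsic parameters are typically coupled during calibration.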

Cited by 8 publications (3 citation statements)
References 31 publications

“…To ensure no joint occlusion, the test subject is required to stand with straight legs and both arms fully extended, pointing sideways in a T-Pose for less than two seconds, during which 50 frames are acquired by both Kinect sensors. Then, the joint position estimates are averaged and fed into the calibration algorithm, which is based on an approach similar to the multiple Kinect Calibration described by Córdova-Esparza et al [15]. The coordinate transformation is calculated via Corresponding Point Set Registration [16].…”
Section: Coordinate Transformation
mentioning
confidence: 99%
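
The Corresponding Point Set Registration mentioned in the excerpt above is commonly solved in closed form with an SVD-based rigid alignment (Arun et al. / Umeyama). The following Python sketch is an illustrative implementation under that assumption; the function name and inputs are chosen for the example and are not taken from [15] or [16].

import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate rotation R and translation t so that dst_i ≈ R @ src_i + t
    for corresponding 3D points (e.g. averaged joint positions seen by two
    Kinect sensors). src and dst are (N, 3) arrays with matched rows."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Example use (hypothetical variable names): express Kinect B joints in Kinect A's frame
# R, t = rigid_transform_3d(joints_b, joints_a)
# joints_b_in_a = joints_b @ R.T + t
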
“…The augmented view is useful in many applications such as three‐dimensional (3D) reconstruction [1–7], video surveillance [8], robot navigation [9], and quality control [10] among others. These applications usually require the extraction of metric information from the environment, which can only be achieved with a calibrated system.…”
Section: Introduction
mentioning
confidence: 99%
“…The use of the depth sensor in an outdoor environment required its calibration [93,94]. Therefore, we performed the sensor calibration in an indoor environment controlling the illumination received by the pattern, mainly in the back of the pattern and the front of the Kinect sensor.…”
mentioning
confidence: 99%