2013 IEEE Workshop on Robot Vision (WORV) 2013
DOI: 10.1109/worv.2013.6521916
Why would I want a gyroscope on my RGB-D sensor?

Abstract: Many RGB-D sensors, e.g. the Microsoft Kinect, use rolling shutter cameras. Such cameras produce geometrically distorted images when the sensor is moving. To mitigate these rolling shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling shutter RGB-D sensor. We examine the effectiveness of our rectification scheme by coupling it with the Kinect Fu…
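The row-wise rectification the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it assumes a constant angular rate over one frame, a known per-row readout time (`row_dt`), and depth points already back-projected to 3D, with all function names (`so3_exp`, `rectify_rows`) invented here.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rectify_rows(points_per_row, gyro_rate, row_dt):
    """Rotate each row's 3D points back to the first row's camera frame.

    points_per_row: list of (N_i, 3) arrays, one per image row.
    gyro_rate: (3,) angular velocity in rad/s, assumed constant over the frame.
    row_dt: readout time per image row in seconds.
    """
    rectified = []
    for r, pts in enumerate(points_per_row):
        # Rotation accrued between the first row and row r.
        R = so3_exp(gyro_rate * (r * row_dt))
        # Map points observed in row r's frame into the reference frame.
        rectified.append(pts @ R.T)
    return rectified
```

With zero angular rate the scans pass through unchanged; the correction grows linearly with row index, matching the rolling-shutter readout model.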

Cited by 13 publications (7 citation statements)
References 16 publications
“…This disagreement could be explained by the rolling shutter distortion of CMOS image sensors when capturing images of moving objects [46,47]. When the frequency of harmonic motion increased (i.e., the average speed increased), more severe image distortion diminished the accuracy of the horizontal (x-axis) displacement measurement. But the normalized peak error analysis considered only the upper and lower peaks of the amplitudes, where the instantaneous velocity of the target plate was zero; hence, there was no rolling shutter distortion in those images.…”
Section: Discussion
confidence: 99%
“…Initialisation of the relative pose and time synchronisation is also done in [14], but in a semi-automated fashion. Here a gyro sensor is attached to a Kinect sensor and used to rectify its depth maps.…”
Section: Related Work
confidence: 99%
“…Another option is to integrate the relative orientations and then use spatiotemporal ICP for alignment [26]. In the rolling shutter case, relative camera orientations are non-trivial to find, and a way to avoid estimating them is to instead use the optical flow magnitude, as proposed in [14]. We improve on this here, by adding a coarse-to-fine search, which speeds up the search by orders of magnitude.…”
Section: Time Offset
confidence: 99%
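The coarse-to-fine time-offset search mentioned above can be illustrated with a minimal sketch. This is not the cited papers' code: it assumes the gyro magnitude and optical-flow magnitude have been resampled to a common rate `fs`, and the function name, step sizes, and correlation score are all choices made here for illustration.

```python
import numpy as np

def time_offset(gyro_mag, flow_mag, fs, coarse_step=0.05, fine_step=0.001,
                max_shift=1.0):
    """Coarse-to-fine search for the shift (seconds) aligning two 1-D
    magnitude signals sampled at rate fs (Hz)."""

    def score(shift):
        # Correlation of the overlapping samples after shifting by `shift`.
        k = int(round(shift * fs))
        if k >= 0:
            a, b = gyro_mag[k:], flow_mag[:len(flow_mag) - k]
        else:
            a, b = gyro_mag[:k], flow_mag[-k:]
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        if n < 2 or a.std() == 0 or b.std() == 0:
            return -np.inf
        return np.corrcoef(a, b)[0, 1]

    # Coarse pass over the full range, then a fine pass around the best hit;
    # this evaluates far fewer shifts than a dense search at fine_step.
    coarse = np.arange(-max_shift, max_shift + coarse_step, coarse_step)
    best = max(coarse, key=score)
    fine = np.arange(best - coarse_step, best + coarse_step, fine_step)
    return max(fine, key=score)
```

Using the flow magnitude sidesteps estimating per-frame rotations: both signals peak when the camera moves fast, so their correlation peaks at the true offset.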
“…We create meshes from both raw and rectified depth scans, and these are then compared to a ground truth mesh. The types of motion we investigate are pan, tilt and wobble (shaking) motions. As our method relies on gyroscope readings, its computational cost is negligible compared to the cost of running Kinect Fusion. This chapter is an extension of a paper at the IEEE Workshop on Robot Vision [10]. Compared to that paper, we have improved the rectification to also correct for lens distortion, and use a coarse-to-fine search to find the time shift more quickly.…”
confidence: 99%