2015
DOI: 10.5937/fmet1501047k
Calibration of Kinect-type RGB-D sensors for robotic applications

Citations: cited by 19 publications (6 citation statements)
References: 13 publications (19 reference statements)
“…The camera intrinsic parameters used in obtaining the normalized coordinates of the image in (9) were taken from the literature [45]. Specifically, α_u = 834.01, α_v = 839.85, u_0 = 305.51 and v_0 = 240.09.…”
Section: Results
confidence: 99%
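
The "normalized coordinates" referred to in this excerpt follow the standard pinhole model: pixel coordinates are shifted by the principal point (u_0, v_0) and divided by the focal lengths in pixels (α_u, α_v). Below is a minimal Python sketch using the quoted values; the function name and the example pixel are illustrative, not taken from either paper.

    # Sketch of pinhole normalization with the intrinsic parameters quoted above.
    ALPHA_U = 834.01   # focal length in pixels along u (horizontal)
    ALPHA_V = 839.85   # focal length in pixels along v (vertical)
    U0 = 305.51        # principal point, u coordinate
    V0 = 240.09        # principal point, v coordinate

    def normalize_pixel(u, v):
        """Map a pixel (u, v) to normalized image coordinates (x, y)."""
        x = (u - U0) / ALPHA_U
        y = (v - V0) / ALPHA_V
        return x, y

    # Example: the principal point maps to the optical axis (0, 0).
    print(normalize_pixel(305.51, 240.09))  # -> (0.0, 0.0)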
“…The systematic errors for each pixel are calculated from the difference between the measured depths and the actual distance (the distance from the table to the ToF camera). These distance offsets are then stored in a look-up table over the three distances, for each pixel, based on the intervals and intensity [39].…”
Section: Methods
confidence: 99%
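
The excerpt describes a per-pixel systematic-error look-up table built from a small set of calibration distances. The sketch below illustrates that idea, assuming three calibration distances, a flat target, and a nearest-distance lookup; the array shapes, function names, and the omission of the intensity dependence are assumptions, not details from the cited work.

    import numpy as np

    CAL_DISTANCES = np.array([1.0, 2.0, 3.0])   # assumed calibration distances in metres

    def build_offset_lut(measured_stacks, true_distances=CAL_DISTANCES):
        """Per-pixel systematic error at each calibration distance.

        measured_stacks: array (3, H, W) with the depth measured at each
        calibration distance (e.g. averaged over many frames of a flat table).
        Returns offsets (3, H, W): measured depth minus true distance.
        """
        return measured_stacks - true_distances[:, None, None]

    def correct_depth(depth, offset_lut, cal_distances=CAL_DISTANCES):
        """Subtract, per pixel, the offset of the nearest calibration distance."""
        idx = np.abs(depth[None, :, :] - cal_distances[:, None, None]).argmin(axis=0)
        rows, cols = np.indices(depth.shape)
        return depth - offset_lut[idx, rows, cols]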
“…Unlike the grayscale depth map, the normal map is described by RGB values; it does not carry distance information, but rather information about the directions of the surface normal vectors. (An example of the correlation between the 3D object coordinates and the coordinates in the RGB and depth maps can be found in [2].) In the rendering process, the orientation of the normals is essential.…”
Section: Generation of Normal Map Based on the Depth Map
confidence: 99%
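
A depth map can be turned into such an RGB-encoded normal map by estimating surface normals from the depth gradients and mapping each unit-normal component into a colour channel. The following is a minimal sketch under those assumptions; the gradient-based estimation and the [0, 255] encoding convention are not prescribed by the cited papers.

    import numpy as np

    def depth_to_normal_map(depth):
        """depth: float array (H, W). Returns a uint8 RGB normal map (H, W, 3)."""
        dz_dv, dz_du = np.gradient(depth)                   # depth gradients along rows / columns
        normals = np.dstack((-dz_du, -dz_dv, np.ones_like(depth)))
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        # Map each component from [-1, 1] to [0, 255]: the three colour channels
        # store direction, not distance.
        return ((normals + 1.0) * 0.5 * 255).astype(np.uint8)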
“…It has come a long way from mere photo editing with various artistic filters in dedicated programs. Several applications now process images to resemble different artistic styles, for example [1][2][3].…”
Section: Introduction
confidence: 99%