Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces
DOI: 10.1145/3132272.3134144
Fast Lossless Depth Image Compression

Cited by 32 publications (9 citation statements). References 9 publications.
“…The approach saves more than 55% bitrate with a significant reduction in coding complexity. Andrew D. Wilson [43] presented a lossless image compression method for 16-bit single-channel images typical of Kinect depth cameras. The algorithm is faster than existing lossless techniques.…”
Section: A. Edge-Preserving Depth-Map Coding
confidence: 99%
“…RoomAlive refined the calibration of multiple projectors and depth cameras as well as the rendering of interactive projection-mapped experiences [9,17]. RoomAlive has been shown to scale to as many as eight cameras in a conference room [31]. Lindlbauer et al. use a voxel grid representation to encode annotations throughout the physical environment [12].…”
Section: Related Work
confidence: 99%
“…The depth stream was encoded using the RVL algorithm proposed by [33], a combination of run-length encoding and variable-length encoding that compresses each frame independently, following the process described in the paper.…”
Section: Visualization
confidence: 99%
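
As a rough illustration of the scheme this statement describes (run-length coding of zero pixels plus variable-length coding of nonzero pixel deltas, one frame at a time), a minimal C++ encoder sketch follows. The nibble layout, helper names, and zigzag mapping are assumptions made for illustration, not Wilson's published implementation.

#include <cstdint>
#include <vector>

// Sketch of an RVL-style encoder for one 16-bit depth frame: zero runs are
// run-length coded and nonzero pixels are coded as variable-length deltas.
// Each value is packed as 3-bit nibble payloads with a continuation bit
// (an assumed layout, chosen only to illustrate the idea).
static void putVarNibbles(std::vector<uint8_t>& out, int& nibbleCount, uint32_t value) {
    do {
        uint8_t nibble = value & 0x7;      // low 3 bits of the value
        value >>= 3;
        if (value) nibble |= 0x8;          // continuation flag
        if (nibbleCount & 1)
            out.back() |= nibble;          // fill the low nibble of the last byte
        else
            out.push_back(nibble << 4);    // start a new byte with the high nibble
        ++nibbleCount;
    } while (value);
}

std::vector<uint8_t> rvlEncode(const uint16_t* depth, size_t numPixels) {
    std::vector<uint8_t> out;
    int nibbles = 0;
    uint16_t previous = 0;                 // last nonzero depth value seen
    size_t i = 0;
    while (i < numPixels) {
        size_t zeros = 0;                  // run of zero (invalid) pixels
        while (i < numPixels && depth[i] == 0) { ++zeros; ++i; }
        putVarNibbles(out, nibbles, static_cast<uint32_t>(zeros));
        size_t start = i;                  // run of nonzero pixels
        while (i < numPixels && depth[i] != 0) ++i;
        putVarNibbles(out, nibbles, static_cast<uint32_t>(i - start));
        for (size_t j = start; j < i; ++j) {
            int32_t delta = static_cast<int32_t>(depth[j]) - previous;
            previous = depth[j];
            // Zigzag-map the signed delta so small magnitudes stay small.
            uint32_t zigzag = (static_cast<uint32_t>(delta) << 1) ^ static_cast<uint32_t>(delta >> 31);
            putVarNibbles(out, nibbles, zigzag);
        }
    }
    return out;
}

Because each frame is encoded independently, as the statement notes, a dropped or corrupted frame does not affect its neighbors, which suits streaming use.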
“…The normal and color streams are decoded normally as video streams (FFmpeg and NVENC were used in our case), updating pixel buffers at 30 fps that are then available in texture memory to the vertex shader. Depth data is decoded using the RVL algorithm [33] and updated into an array buffer aligned with the UV coordinates of a vertex array passed to the vertex shader.…”
Section: Visualization
confidence: 99%
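
To make the depth path in that pipeline concrete, here is a matching decoder sketch that reconstructs the 16-bit depth array, which could then be copied into a GPU array buffer for the vertex shader as the passage describes. The nibble format and names are assumptions carried over from the encoder sketch above; the actual decode step is whatever [33] specifies.

#include <cstdint>

// Sketch of the matching RVL-style decoder: reads back the zero runs and
// zigzag-coded deltas produced by the encoder sketch above and rebuilds a
// 16-bit depth frame (ready for upload to a GPU array buffer).
struct NibbleReader {
    const uint8_t* data;
    size_t index = 0;                      // nibble index into the stream
    uint32_t getVar() {
        uint32_t value = 0;
        int shift = 0;
        uint8_t nibble;
        do {
            uint8_t byte = data[index >> 1];
            nibble = (index & 1) ? (byte & 0xF) : (byte >> 4);
            ++index;
            value |= static_cast<uint32_t>(nibble & 0x7) << shift;
            shift += 3;
        } while (nibble & 0x8);            // continuation flag set
        return value;
    }
};

void rvlDecode(const uint8_t* encoded, uint16_t* depth, size_t numPixels) {
    NibbleReader in{encoded};
    uint16_t previous = 0;                 // last nonzero depth value
    size_t i = 0;
    while (i < numPixels) {
        size_t zeros = in.getVar();        // run of zero (invalid) pixels
        while (zeros--) depth[i++] = 0;
        size_t nonzeros = in.getVar();     // run of nonzero pixels
        while (nonzeros--) {
            uint32_t zigzag = in.getVar();
            int32_t delta = static_cast<int32_t>(zigzag >> 1) ^ -static_cast<int32_t>(zigzag & 1);
            previous = static_cast<uint16_t>(previous + delta);
            depth[i++] = previous;
        }
    }
}

Decoding on the CPU into a plain array, as sketched here, keeps the GPU side simple: the vertex shader only reads an already-unpacked depth buffer aligned with the vertex UVs.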