This paper presents an overview of the in-loop processing and filtering technologies in the Versatile Video Coding (VVC) standard. These processes comprise luma mapping with chroma scaling, the deblocking filter, sample adaptive offset, the adaptive loop filter, and the cross-component adaptive loop filter. They are qualified as "in-loop" because they are applied inside the encoding and decoding loops, before the pictures are stored in the decoded picture buffer. The filters are complementary and address different purposes. Luma mapping with chroma scaling adaptively modifies the distribution of the coded samples for improved coding efficiency. The deblocking filter reduces blocking discontinuities. Sample adaptive offset mostly reduces artifacts resulting from the quantization of transform coefficients. The adaptive loop filter and cross-component adaptive loop filter are adaptive filters that enhance the reconstructed signal, using, for instance, Wiener-filter-based encoding approaches. The paper provides an overview of the in-loop filtering process and a detailed description of the filtering algorithms. Objective compression efficiency results are provided for each filter, with an indication of cumulative coding gains. Subjective benefits are illustrated. Implementation issues considered during the design of the VVC in-loop filters are also discussed.
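To make the sample-adaptive-offset idea concrete, the following is a minimal sketch of a band-offset style filter: samples are classified into equal-width intensity bands, and each band is shifted by the mean reconstruction error observed for that band. The function name, band count, and 1-D sample model are illustrative assumptions, not the normative VVC design (which also includes edge-offset classification and signaling constraints).

```python
def band_offset(recon, orig, num_bands=32, max_val=255):
    """Illustrative band-offset sketch (not the normative VVC SAO).

    recon: reconstructed sample values; orig: original sample values.
    Returns (filtered samples, per-band offsets).
    """
    band_width = (max_val + 1) // num_bands
    sums = [0] * num_bands
    counts = [0] * num_bands
    # Accumulate the reconstruction error per intensity band.
    for r, o in zip(recon, orig):
        b = r // band_width
        sums[b] += o - r
        counts[b] += 1
    # Each band's offset is its mean error, rounded to an integer.
    offsets = [round(sums[b] / counts[b]) if counts[b] else 0
               for b in range(num_bands)]
    # Apply the offsets, clipping to the valid sample range.
    filtered = [min(max_val, max(0, r + offsets[r // band_width]))
                for r in recon]
    return filtered, offsets
```

In a real codec the offsets would be derived by the encoder and transmitted to the decoder, which applies them without access to the original samples.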
Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP, named HTTP/2, was proposed. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists of estimating the bandwidth using WebSocket (WS) over HTTP/2 and performing partial adaptation on the server side. The results show that using the server push feature and WebSocket layered over HTTP/2 allows faster loading times and faster convergence to the nominal state. The proposed solution is studied in the context of a direct client-server HTTP/2 connection; intermediate caches are not considered in this study.
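Bandwidth estimation in DASH-style adaptation is commonly done by smoothing per-segment throughput measurements. The sketch below uses an exponentially weighted moving average (EWMA) over (bytes, seconds) samples; this particular estimator and the `alpha` value are assumptions for illustration, not necessarily the estimator used in the paper's WS-based scheme.

```python
def ewma_bandwidth(samples, alpha=0.8):
    """Estimate throughput in bits/s from (bytes_received, seconds) samples.

    alpha is the weight given to the previous estimate (hypothetical value).
    """
    est = None
    for nbytes, secs in samples:
        inst = nbytes * 8 / secs          # instantaneous throughput, bits/s
        est = inst if est is None else alpha * est + (1 - alpha) * inst
    return est
```

An adaptation logic (client- or server-side) would compare such an estimate against the bitrates advertised in the DASH manifest to pick the next representation.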
We propose a new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is less efficient on smooth images, we also introduce new predictors that are dynamically optimized using a least-squares criterion. Lossless compression results obtained on a large-scale medical image database are more than 4% better on CTs and 9% better on MRIs than resolution-scalable JPEG 2000 (J2K), and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS, and an equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.
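The least-squares criterion mentioned above amounts to fitting predictor weights that minimize the squared prediction error over causal training samples. The following is a hedged sketch of that idea for a hypothetical two-tap linear predictor (two causal neighbor values per sample), solved via the normal equations with Cramer's rule; it is not the paper's exact predictor design.

```python
def ls_predictor_2tap(contexts, targets):
    """Fit (w0, w1) minimizing sum((w0*a + w1*b - t)^2) over training pairs.

    contexts: list of (a, b) causal-neighbor values; targets: true samples.
    Illustrative 2-tap case; real schemes use larger causal contexts.
    """
    saa = sab = sbb = sat = sbt = 0.0
    for (a, b), t in zip(contexts, targets):
        # Accumulate the normal-equation sums A^T A and A^T t.
        saa += a * a; sab += a * b; sbb += b * b
        sat += a * t; sbt += b * t
    det = saa * sbb - sab * sab
    if det == 0:
        return 0.5, 0.5  # degenerate system: fall back to an averaging predictor
    # Solve the 2x2 normal equations by Cramer's rule.
    w0 = (sat * sbb - sbt * sab) / det
    w1 = (saa * sbt - sab * sat) / det
    return w0, w1
```

In a DPCM coder, the fitted weights would form the prediction, and only the (optionally quantized, for near-lossless coding) residual would be entropy coded.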