High-throughput sequencing (HTS) data are commonly stored as raw sequencing reads in FASTQ format or as reads mapped to a reference in SAM format, both with large storage footprints. Worldwide growth of HTS data has prompted the development of compression methods that aim to significantly reduce HTS data size. Here we report on a benchmarking study of available compression methods on a comprehensive set of HTS data using an automated framework.
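The core of such a benchmark is a harness that times each (compress, decompress) pair and checks losslessness. A minimal sketch, using Python's built-in `gzip` as a stand-in for the specialized HTS compressors actually evaluated (the FASTQ record and the metric names are illustrative assumptions, not the paper's framework):

```python
import gzip
import time

def benchmark_compressor(data: bytes, compress, decompress):
    """Time one (compress, decompress) pair and report the
    compression ratio achieved on `data`."""
    t0 = time.perf_counter()
    blob = compress(data)
    compress_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    restored = decompress(blob)
    decompress_s = time.perf_counter() - t0

    # Sanity check: a general-purpose compressor must be lossless.
    assert restored == data, "compressor is not lossless"
    return {
        "ratio": len(data) / len(blob),
        "compress_s": compress_s,
        "decompress_s": decompress_s,
    }

# Tiny synthetic FASTQ record, repeated so the input is compressible.
fastq = b"@read1\nACGTACGTACGT\n+\nIIIIIIIIIIII\n" * 1000
result = benchmark_compressor(fastq, lambda d: gzip.compress(d, 9), gzip.decompress)
```

Running the same harness over every tool and dataset, and recording ratio plus both timings, yields the comparison table such a study reports.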
Dynamic Adaptive Streaming over HTTP (DASH) is a multimedia streaming standard for delivering high-quality multimedia content over the Internet using conventional HTTP Web servers. As a fundamental feature, it enables automatic switching between quality levels according to network conditions, user requirements, and expectations. Currently, the proposed adaptation schemes for HTTP streaming mostly rely on throughput measurements and/or buffer-related metrics, such as buffer level and exhaustion. In this paper, we propose to enhance the DASH adaptation logic by feeding it with additional information from our evaluation of the users' perception, approximating the user-perceived quality of video playback. The proposed model combines TCP-, buffer-, and media-content-related metrics as well as user requirements and expectations as input for the DASH adaptation logic. Experiments demonstrate that the chosen model enhances the capability of the adaptation logic to select the optimal video quality level. Finally, we integrated all our findings into a real DASH system with QoE monitoring capabilities.
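For context, the throughput- and buffer-based baseline that such QoE-aware models extend can be sketched in a few lines. This is a generic illustration of the baseline rule, not the paper's proposed model; the safety margin and buffer thresholds are assumed parameters:

```python
def select_bitrate(bitrates, throughput_bps, buffer_s,
                   buffer_target_s=20.0, safety=0.8):
    """Pick the highest representation whose bitrate fits within a
    safety-discounted throughput estimate, then step down one level
    if the playout buffer is close to exhaustion."""
    ladder = sorted(bitrates)
    budget = throughput_bps * safety
    candidates = [b for b in ladder if b <= budget] or [ladder[0]]
    choice = candidates[-1]
    if buffer_s < 0.25 * buffer_target_s and choice != ladder[0]:
        # Buffer running low: trade quality for stall avoidance.
        choice = ladder[ladder.index(choice) - 1]
    return choice
```

The paper's contribution is to feed additional perception-related signals into this decision, rather than relying on throughput and buffer level alone.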
This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools, and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated using two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.
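The central measurement in such a framework is concordance between variant calls made from the original data and from the lossily-compressed data. A minimal sketch of that comparison, assuming variants are represented as (chromosome, position, ref, alt) tuples (the representation and metric names are illustrative, not the framework's specification):

```python
def variant_concordance(ref_calls, test_calls):
    """Compare variant calls made with original quality values
    (ref_calls) against calls made after lossy compression
    (test_calls); report sensitivity and precision of the latter."""
    ref, test = set(ref_calls), set(test_calls)
    true_positives = len(ref & test)
    sensitivity = true_positives / len(ref) if ref else 1.0
    precision = true_positives / len(test) if test else 1.0
    return sensitivity, precision

original = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T")}
lossy = {("chr1", 100, "A", "G"), ("chr2", 50, "G", "A")}
sens, prec = variant_concordance(original, lossy)
```

A lossy quality-value compressor is acceptable to the extent that both numbers stay close to 1.0 on the reference test sets.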
With the emergence of the High Efficiency Video Coding (HEVC) standard, a dataflow description of the decoder was developed as part of the MPEG-B standard. This dataflow description presented modest framerate results, which led us to propose methodologies to improve its performance. In this paper, we introduce architectural improvements that expose more parallelism using YUV- and frame-based parallel decoding. We also present platform optimizations based on the use of SIMD functions and cache-efficient FIFOs. Results show an average acceleration factor of 5.8 in the decoding framerate over the reference architecture.
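The YUV-based parallelism amounts to processing the three colour planes of a frame as independent tasks. A minimal structural sketch (threads here only illustrate the task decomposition; the actual work uses decoder actors on native cores, and the function names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(planes, process_plane):
    """Apply `process_plane` to the Y, U and V planes of one frame
    concurrently, mirroring the YUV-based task split."""
    keys = list(planes)
    with ThreadPoolExecutor(max_workers=len(keys)) as pool:
        outputs = pool.map(lambda k: process_plane(k, planes[k]), keys)
        return dict(zip(keys, outputs))

frame = {"Y": [1, 2, 3, 4], "U": [5, 6], "V": [7, 8]}
out = process_frame(frame, lambda name, samples: [s * 2 for s in samples])
```

Frame-based parallelism applies the same idea one level up, decoding independent frames concurrently; the SIMD and FIFO optimizations then speed up each task's inner loops.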
In this paper we propose a design methodology for partitioning dataflow applications on a multi-clock-domain architecture. This work shows how, starting from a high-level dataflow representation of a dynamic program, it is possible to reduce overall power consumption without impacting performance. Two different approaches are illustrated, both based on the post-processing and analysis of the causation trace of a dataflow program. The methodology and experimental results are demonstrated in a real-world scenario using an MPEG-4 Simple Profile decoder.
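One way such trace-driven partitioning can work is to derive per-actor activity statistics from the causation trace and place rarely-firing actors in a slower (lower-power) clock domain. A hedged sketch of that idea (the activity threshold and the two-domain split are illustrative assumptions, not the paper's exact algorithm):

```python
def partition_clock_domains(actor_firings, threshold=0.1):
    """Given firing counts per actor extracted from a causation
    trace, assign frequently-firing actors to the fast clock domain
    and the rest to a slow, power-saving domain."""
    total = sum(actor_firings.values())
    fast, slow = [], []
    for actor, firings in actor_firings.items():
        (fast if firings / total >= threshold else slow).append(actor)
    return sorted(fast), sorted(slow)

# Hypothetical firing counts for three actors of a video decoder.
fast, slow = partition_clock_domains({"parser": 900, "idct": 80, "vlc": 20})
```

Actors in the slow domain can then be clocked just fast enough to meet their observed throughput requirement, reducing dynamic power without changing the application's behaviour.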