2018
DOI: 10.3384/diss.diva-143802

Efficient HTTP-based Adaptive Streaming of Linear and Interactive Videos

Abstract: Online video streaming has gained tremendous popularity over recent years and currently constitutes the majority of Internet traffic. As large-scale on-demand streaming continues to gain popularity, several important questions and challenges remain unanswered. This thesis addresses open questions in the areas of efficient content delivery for HTTP-based Adaptive Streaming (HAS) from different perspectives (client, network and content provider) and in the design, implementation, and evaluation of interactive s…

Cited by 3 publications (4 citation statements); references 132 publications (166 reference statements).
“…In this study, our focus is on quantizing weights and activations. While various approaches have explored 4-bit [40], [41], binary [42]-[46], and adaptive [47], [48] quantization, our work centers on 8-bit quantization, which is widely supported by most microcontrollers (MCUs). The quantization of parameters offers the following advantages: 1.…”
Section: B. Network Quantization (citation type: mentioning, confidence: 99%)
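This excerpt centers on 8-bit quantization of weights and activations for MCU deployment. As a rough illustration of what that mapping typically looks like, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization; the function names and the per-tensor (rather than per-channel) scheme are assumptions for illustration, not taken from the cited works.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor 8-bit quantization of a weight array.

    Maps float weights to int8 with a single scale factor, the kind of
    scheme most MCU inference runtimes expect for weights.
    """
    scale = np.max(np.abs(w)) / 127.0 if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for comparison or simulation."""
    return q.astype(np.float32) * scale

# Example: quantization error stays small for well-scaled weights.
w = np.random.randn(128, 64).astype(np.float32) * 0.1
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize_int8(q, s))))
```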
“…The parameters quantization can either be performed by retraining the neural network model, a process that is called Quantization-Aware Training [49], [50], or done without retraining, a process that is often referred to as Post-Training Quantization [41], [51], [52]. In this work, we use the Quantization-Aware Training [39], [53] since it has proven to achieve higher accuracy value [40].…”
Section: Less Working Memory and Cache for Activations (citation type: mentioning, confidence: 99%)
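To make the Post-Training Quantization vs. Quantization-Aware Training distinction in this excerpt concrete, the sketch below contrasts the two in a few lines of NumPy: PTQ quantizes already-trained float weights once, while QAT inserts a quantize-dequantize ("fake quantization") step into the forward pass so training compensates for rounding error. The shapes, learning rate, and gradient here are hypothetical placeholders, not taken from the cited works.

```python
import numpy as np

def fake_quant(x: np.ndarray, scale: float) -> np.ndarray:
    """Quantize-dequantize ("fake quantization"): the rounding seen at int8
    inference is simulated while computation stays in float."""
    return np.clip(np.round(x / scale), -127, 127) * scale

# Post-Training Quantization: train in float, quantize the weights once at the end.
w_float = np.random.randn(4, 4) * 0.05          # pretend these are trained weights
scale = np.max(np.abs(w_float)) / 127.0
w_ptq = fake_quant(w_float, scale)              # one-shot, no retraining

# Quantization-Aware Training: every forward pass uses the fake-quantized view,
# so weight updates (a straight-through estimator in practice) compensate for
# rounding error; sketched here as a single simulated training step.
lr, grad = 0.01, np.random.randn(4, 4) * 0.01   # hypothetical gradient
w_qat = w_float - lr * grad                      # update the float "shadow" weights
y = fake_quant(w_qat, scale)                     # forward pass sees quantized weights
```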
“…• Computational Efficiency: For models like DeepLabV3 with high FLOPs, adopting techniques such as pruning, quantization, or knowledge distillation could reduce the computational load without significantly compromising accuracy (Li H. et al, 2016;Krishnamoorthi, 2018;Gou et al, 2021;Kim et al, 2021).…”
Section: Recommendations for Performance Enhancement (citation type: mentioning, confidence: 99%)
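This recommendation lists pruning, quantization, and knowledge distillation as ways to cut a model's computational load. As one hedged example of the first technique, the sketch below performs unstructured magnitude pruning in NumPy; the sparsity level and function name are illustrative assumptions rather than anything specified in the cited works.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    Zeroed weights can be skipped at inference time, reducing FLOPs, at the
    cost of some accuracy that fine-tuning usually recovers.
    """
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.random.randn(256, 256).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.7)
print("zeroed fraction:", float(np.mean(pruned == 0.0)))
```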
“…In addition, cloud computing service models serve as a deployment reference: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), as well as essential features, namely, on-demand self-service, extensive network access, resource pooling, rapid elasticity, and measured service [14]. A highly-used term is video streaming, a technique that allows customers to start playing a video without having to download the entire file [15]. For example, Netflix is one of the world's leading streaming platforms [16], [17].…”
Section: Introduction (citation type: mentioning, confidence: 99%)
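Since this excerpt defines video streaming as playback that starts before the whole file is downloaded, which is exactly the setting the thesis's HAS work addresses, here is a minimal sketch, assuming a generic throughput-based adaptation heuristic rather than the thesis's own algorithms, of how a HAS client might pick a representation for the next segment. The bitrate ladder, safety factor, and EWMA weights are hypothetical values chosen for illustration.

```python
# Available representations (bitrates in kbit/s), as advertised in a HAS manifest.
LADDER_KBPS = [300, 750, 1500, 3000, 6000]

def pick_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Rate-based adaptation: request the highest representation whose bitrate
    fits under a safety-discounted estimate of recent download throughput."""
    budget = throughput_kbps * safety
    feasible = [b for b in LADDER_KBPS if b <= budget]
    return feasible[-1] if feasible else LADDER_KBPS[0]

# Example: an exponentially weighted throughput estimate drives each segment request.
estimate = 0.0
for sample in [2200.0, 2600.0, 1800.0, 5200.0]:   # measured per-segment throughput
    estimate = sample if estimate == 0.0 else 0.7 * estimate + 0.3 * sample
    print(f"estimate={estimate:.0f} kbit/s -> request {pick_bitrate(estimate)} kbit/s")
```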