2023
DOI: 10.36227/techrxiv.22256704.v1
Preprint

JND-aware Two-pass Per-title Encoding Scheme for Adaptive Live Streaming

Abstract: Adaptive live video streaming applications utilize a predefined collection of bitrate-resolution pairs, known as a bitrate ladder, for simplicity and efficiency, eliminating the need for additional run-time computation to determine the optimal pairs for each video. These applications do not incorporate two-pass encoding methods due to the increased latency. However, an optimized bitrate ladder could result in lower storage and delivery costs and improved Quality of Experience (QoE). This paper presents a Just Noticea…
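The abstract's notion of a JND-aware bitrate ladder can be illustrated generically. The sketch below is not the paper's scheme (the abstract is truncated here); it only shows the common idea of pruning candidate bitrate-resolution rungs whose quality gain over the previous rung falls below an assumed one-JND threshold. All candidate rungs, quality scores, and the threshold value are hypothetical placeholders.

```python
# Generic illustration (not the paper's scheme): prune a candidate
# bitrate ladder so that consecutive rungs differ by at least one
# Just Noticeable Difference (JND) in quality.
CANDIDATES = [  # (bitrate kbps, resolution height, quality score) - placeholders
    (800,   360, 62.0),
    (1600,  540, 71.0),
    (2500,  720, 78.0),
    (4500, 1080, 86.0),
    (6800, 1080, 89.0),
]
JND = 6.0  # assumed quality difference treated as one JND

def jnd_aware_ladder(candidates, jnd=JND):
    """Keep a rung only if it improves quality by at least one JND."""
    ladder = [candidates[0]]
    for rung in candidates[1:]:
        if rung[2] - ladder[-1][2] >= jnd:
            ladder.append(rung)
    return ladder

print(jnd_aware_ladder(CANDIDATES))
# drops the 6800 kbps rung, whose gain over the 4500 kbps rung is below one JND
```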


Cited by 2 publications (2 citation statements)
References 34 publications
“…Predictive models can comprehensively understand the content complexity and characteristics by extracting relevant spatiotemporal features, such as motion vectors, texture patterns, and frame-to-frame differences [21]. In this paper, three DCT-energy-based features [22], the average luma texture energy (E_Y), the average gradient of the luma texture energy (h), and the average luminance (L_Y), are extracted for each segment using the open-source Video Complexity Analyzer (VCA) [6,22].…”
Section: Video Complexity Feature Extraction
confidence: 99%
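To make the feature definitions above concrete, here is a minimal sketch, not the actual VCA implementation, of how block-wise DCT energy could yield E_Y, its temporal gradient h, and the average luminance L_Y for a segment. The 32x32 block size, the unweighted absolute-coefficient energy, and the 8-bit grayscale frame format are assumptions for illustration.

```python
# Minimal sketch of the three DCT-energy-based features E_Y, h, L_Y.
# Assumptions: frames are 8-bit luma planes as numpy arrays; block size
# and the energy definition are simplified placeholders, not VCA's.
import numpy as np
from scipy.fft import dctn

BLOCK = 32  # hypothetical block size

def block_texture_energy(luma: np.ndarray) -> float:
    """Average DCT-based texture energy of one frame (DC term excluded)."""
    h, w = luma.shape
    energies = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = luma[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
            coeffs = dctn(block, norm="ortho")
            coeffs[0, 0] = 0.0  # drop the DC coefficient (pure brightness)
            energies.append(np.abs(coeffs).sum())
    return float(np.mean(energies))

def segment_features(frames: list) -> tuple:
    """Return (E_Y, h, L_Y) for a segment given its luma frames."""
    per_frame_energy = [block_texture_energy(f) for f in frames]
    E_Y = float(np.mean(per_frame_energy))  # average texture energy
    h = float(np.mean(np.abs(np.diff(per_frame_energy)))) if len(frames) > 1 else 0.0
    L_Y = float(np.mean([f.mean() for f in frames]))  # average luminance
    return E_Y, h, L_Y
```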
“…These representations enable continuous adaptation of the video delivery to the client's network conditions and device capabilities [3]. The increase in computational complexity when using codecs such as High Efficiency Video Coding (HEVC) [4] and Versatile Video Coding (VVC) [5], together with improvements in video characteristics such as resolution [6], framerate [7], and bit-depth, raises the need for a large-scale, highly efficient video encoding environment [8]. This is crucial for DASH-based content provisioning, as it requires encoding multiple representations of the same video content on an encoding server.…”
Section: Introduction
confidence: 99%
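The statement above refers to encoding multiple representations of the same content on an encoding server. As an illustration only, the snippet below loops over a placeholder bitrate ladder and invokes ffmpeg with libx265; the ladder values, file names, and single-pass settings are assumptions, not the configuration used in the cited work.

```python
# Illustrative only: encode multiple DASH representations of one source
# with ffmpeg/libx265. Ladder values and output names are placeholders.
import subprocess

LADDER = [  # (width, height, target bitrate) - hypothetical example ladder
    (3840, 2160, "16000k"),
    (1920, 1080, "4500k"),
    (1280, 720,  "2500k"),
    (640,  360,  "800k"),
]

def encode_representations(src: str) -> None:
    for width, height, bitrate in LADDER:
        out = f"rep_{height}p_{bitrate}.mp4"
        cmd = [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libx265",
            "-b:v", bitrate,
            "-vf", f"scale={width}:{height}",
            "-an",  # audio is handled separately in practice
            out,
        ]
        subprocess.run(cmd, check=True)
```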