Survey on Algorithm and VLSI Architecture for MPEG-Like Video Coder
2016
DOI: 10.1007/s11265-016-1160-3

Cited by 8 publications (7 citation statements)
References 124 publications
“…These MBs are further divided into smaller sub-blocks of size 4 × 4. The deblocking filter is applied to the reconstructed video frames to improve the visual quality of the video: the vertical edges of every 4 × 4 block in an MB are filtered first, followed by the horizontal edges of these sub-blocks [3]. The luma MB is filtered first, both vertically and horizontally, followed by chroma Cb and chroma Cr [4].…”
Section: Deblocking Filter Algorithm for H.264/AVC
confidence: 99%
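The filtering order described in the statement above (vertical edges of every 4 × 4 sub-block first, then horizontal edges; the luma MB before chroma Cb and Cr) can be sketched as follows. This is a minimal illustration only: the frame layout and the filter_edge helper are assumptions, and the actual boundary-strength decisions and clipping of the H.264/AVC filter are omitted.

```python
# Minimal sketch of the H.264/AVC deblocking-filter ordering described above.
# filter_edge() is a hypothetical placeholder, not the standard's adaptive filter.

MB_SIZE_LUMA = 16   # luma macroblock is 16x16 samples
MB_SIZE_CHROMA = 8  # chroma (4:2:0) macroblock is 8x8 samples
BLOCK = 4           # edges are filtered on a 4x4 sub-block grid

def filter_edge(plane, mb_x, mb_y, offset, direction):
    """Placeholder edge filter (boundary strength, thresholds, clipping omitted)."""
    pass

def deblock_macroblock(plane, mb_x, mb_y, mb_size):
    # Vertical edges of every 4x4 sub-block are filtered first...
    for x in range(0, mb_size, BLOCK):
        filter_edge(plane, mb_x, mb_y, x, "vertical")
    # ...followed by the horizontal edges of the same sub-blocks.
    for y in range(0, mb_size, BLOCK):
        filter_edge(plane, mb_x, mb_y, y, "horizontal")

def deblock_frame(luma, cb, cr, mbs_x, mbs_y):
    for mb_y in range(mbs_y):
        for mb_x in range(mbs_x):
            # The luma MB is processed first, then chroma Cb, then chroma Cr.
            deblock_macroblock(luma, mb_x, mb_y, MB_SIZE_LUMA)
            deblock_macroblock(cb, mb_x, mb_y, MB_SIZE_CHROMA)
            deblock_macroblock(cr, mb_x, mb_y, MB_SIZE_CHROMA)
```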
“…Even if we improve the communication performance of the network, the lossy compression stage will become the bottleneck for transferring the video stream to the anomaly detection part, due to the heavy computation of the compression process. To overcome the difficulty of improving lossy-compression performance, we can accelerate the compressor and the decompressor with GPUs [15] and dedicated hardware [16]. However, lossy compression reduces the pixel information in frames while guaranteeing the bandwidth of the video stream.…”
Section: Introduction
confidence: 99%
“…It is well known that hardwired throughput is a crucial consideration in hardware architecture design, especially for real-time video coders with RD optimization [14], [32]. Parallel processing and pipelining are crucial techniques for efficient implementation under a specific throughput constraint on hardware platforms [14], [32]. Efficient pipelining for SDQ is challenged by the inherent data dependencies in the trellis search and in the context state transitions of CABAC [14].…”
Section: Introduction
confidence: 99%
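The data dependency mentioned in the statement above can be illustrated abstractly: in CABAC-style coding, the context state updated by one symbol is required before the next symbol can be coded, so consecutive symbols cannot simply be dispatched to independent pipeline stages. The toy context model below is an assumption for illustration only; the transition rules are hypothetical stand-ins, not the actual CABAC state machine or the architecture discussed in the cited work.

```python
# Toy illustration of the serial dependency: each symbol's coding reads and
# updates a shared context state, so symbol i+1 cannot start until symbol i
# has committed its state update, which limits straightforward pipelining.

NEXT_STATE_MPS = lambda s: min(s + 1, 63)   # assumed most-probable-symbol transition
NEXT_STATE_LPS = lambda s: max(s - 2, 0)    # assumed least-probable-symbol transition

def code_symbols(symbols, state=0, mps=0):
    """Serially code a list of binary symbols against a single context."""
    coded = []
    for bin_val in symbols:
        coded.append((bin_val, state))      # coding uses the current state
        if bin_val == mps:
            state = NEXT_STATE_MPS(state)   # the update must complete before
        else:                               # the next symbol is coded
            state = NEXT_STATE_LPS(state)
            if state == 0:
                mps = 1 - mps               # flip the most probable symbol
    return coded

print(code_symbols([0, 0, 1, 0, 1, 1]))
```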