2006 IEEE International Conference on Multimedia and Expo
DOI: 10.1109/icme.2006.262510

A High Throughput VLSI Architecture Design for H.264 Context-Based Adaptive Binary Arithmetic Decoding with Look Ahead Parsing

Abstract: In this paper we present a high-throughput VLSI architecture design for Context-based Adaptive Binary Arithmetic Decoding (CABAD) in MPEG-4 AVC/H.264. To speed up the inherently sequential operations in CABAD, we break down the processing bottleneck by proposing a look-ahead codeword parsing technique on segmented context tables with cache registers, which reduces the cycle count by up to 53% on average. Based on a 0.18 µm CMOS technology, the proposed design outperforms the existing design by both reducing 40% o…
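The "inherently sequential operations" the abstract refers to can be seen in a toy software model of regular-bin decoding. This is a hedged sketch, not the paper's architecture or the exact H.264 process: the real decoder uses 64-entry probability-state tables (rangeTabLPS, transIdxMPS/LPS), replaced here by a floating-point stand-in.

```python
def make_decoder(bits):
    """Toy CABAC-style bin decoder over a 0/1 bitstream (MSB first)."""
    it = iter(bits)
    next_bit = lambda: next(it, 0)            # pad past end of stream

    state = {"rng": 510, "off": 0}            # 9-bit initialisation, as in H.264
    for _ in range(9):
        state["off"] = (state["off"] << 1) | next_bit()

    def decode_bin(ctx):
        """Decode one bin with context ctx = {'p_lps': float, 'mps': 0/1}."""
        r_lps = max(2, int(state["rng"] * ctx["p_lps"]))  # stand-in for rangeTabLPS
        r_mps = state["rng"] - r_lps
        if state["off"] < r_mps:              # MPS decoded
            bit = ctx["mps"]
            state["rng"] = r_mps
            ctx["p_lps"] = max(0.02, ctx["p_lps"] * 0.95)  # toy adaptation
        else:                                 # LPS decoded
            bit = 1 - ctx["mps"]
            state["off"] -= r_mps
            state["rng"] = r_lps
            ctx["p_lps"] = min(0.5, ctx["p_lps"] * 1.5)
        while state["rng"] < 256:             # renormalization loop: the
            state["rng"] <<= 1                # serial latency that several
            state["off"] = (state["off"] << 1) | next_bit()  # designs target
        return bit

    return decode_bin

# Every bin depends on the updated range/offset/context of the previous
# one, so throughput gains need look-ahead or multi-bin techniques
# rather than naive pipelining.
decode = make_decoder([0, 1] * 64)
ctx = {"p_lps": 0.3, "mps": 0}
bins = [decode(ctx) for _ in range(16)]
```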

Cited by 14 publications (5 citation statements). References 4 publications.
“…Techniques to reduce the latency and data dependency of CABAD have been widely discussed in the literature; they follow five basic approaches: pipelining; context pre-fetching and caching; elimination of the renormalization loop; parallel decoding engines; and memory organization. The pipeline strategy is used in [4] to increase the bins/cycle rate. An alternative to solve the latency of the renormalization process is presented in [5].…”
Section: Related Work
confidence: 99%
“…High efficiency in the decoding process using pre-fetching and cache contexts is discussed in [6] and [9], respectively. Memory optimization and reorganization are addressed in [4].…”
Section: Related Work
confidence: 99%
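The context pre-fetch/cache idea these statements describe can be sketched as a small register cache in front of the context memory: load one segment of contexts once, update them locally per bin, and write back once. A hypothetical model (class and method names are illustrative, not taken from any cited design):

```python
class ContextMemory:
    """Stand-in for the context-state SRAM; counts every access."""
    def __init__(self, n):
        self.states = [0] * n
        self.accesses = 0
    def read(self, i):
        self.accesses += 1
        return self.states[i]
    def write(self, i, v):
        self.accesses += 1
        self.states[i] = v

class ContextCache:
    """One segment of contexts held in registers: one burst load, one
    burst write-back; per-bin reads/updates stay in registers."""
    def __init__(self, mem, base, size):
        self.mem, self.base = mem, base
        self.regs = [mem.read(base + k) for k in range(size)]  # pre-fetch
    def bump(self, i):
        self.regs[i - self.base] += 1     # toy per-bin state update
    def writeback(self):
        for k, v in enumerate(self.regs):
            self.mem.write(self.base + k, v)

mem = ContextMemory(64)
cache = ContextCache(mem, base=8, size=4)
for _ in range(100):                      # 100 bins touching 4 contexts
    for i in range(8, 12):
        cache.bump(i)
cache.writeback()
# 4 burst loads + 4 write-backs = 8 SRAM accesses, versus 800
# (a read plus a write per bin) without the cache.
cached_accesses = mem.accesses
```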
“…The pipeline strategy is employed in [14] to increase the bins/cycle rate. An alternative to solve the latency of the renormalization process is presented in [2].…”
Section: Related Work
confidence: 99%
“…High efficiency in the decoding process, employing pre-fetching and cache contexts, is discussed in [15] and [16], respectively. Memory optimization and reorganization are addressed in [14].…”
Section: Related Work
confidence: 99%
“…Accordingly, we propose a new parallelism processing approach and an additional prediction method.

[Comparison table from the citing paper; column headings were lost in extraction:]

[5]   one-bin    yes      no    no
[7]   multi-bin  no       yes   no
[8]   two-bin    yes      yes   no
[9]   one-bin    yes      yes   no
[10]  multi-bin  no       yes   no
[11]  two-bin    unknown  yes   yes
[13]  one-bin    yes      yes   yes
[14]  two-bin    yes      yes   yes…”
Section: Introduction
confidence: 99%
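The "two-bin" entries in the comparison above hint at speculative multi-bin decoding: hardware evaluates the second bin's interval split for both possible outcomes of the first bin, then selects with a multiplexer once bin 1 resolves. A minimal sketch under simplifying assumptions (renormalization, context adaptation, and MPS bookkeeping are ignored; all names are illustrative):

```python
def split(rng, off, p_lps):
    """One interval split; returns (1 if LPS else 0, new_range, new_offset).
    p_lps stands in for the table-driven LPS probability estimate."""
    r_lps = max(2, int(rng * p_lps))
    r_mps = rng - r_lps
    if off < r_mps:
        return 0, r_mps, off          # MPS sub-interval
    return 1, r_lps, off - r_mps      # LPS sub-interval

def decode_two_bins(rng, off, p1, p2):
    """Speculative two-bin decode: evaluate bin 2's split for BOTH
    possible bin-1 outcomes (concurrently in hardware), then select."""
    r_lps1 = max(2, int(rng * p1))
    r_mps1 = rng - r_lps1
    spec = {
        0: split(r_mps1, off, p2),            # assuming bin 1 = MPS
        1: split(r_lps1, off - r_mps1, p2),   # assuming bin 1 = LPS
    }
    b1 = 0 if off < r_mps1 else 1             # bin 1 resolves ...
    b2, rng2, off2 = spec[b1]                 # ... and the mux selects
    return b1, b2, rng2, off2

result = decode_two_bins(510, 170, 0.25, 0.375)
```

The losing speculative branch is simply discarded, as a hardware mux would drop it; the cost is duplicated split logic per extra bin, which is why designs are classified as one-bin, two-bin, or multi-bin engines.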