2003 IEEE Workshop on Signal Processing Systems (IEEE Cat. No.03TH8682)
DOI: 10.1109/sips.2003.1235690
A mixed QoS SDRAM controller for FPGA-based high-end image processing

Cited by 20 publications (10 citation statements)
References 5 publications
“…While high bandwidth can always be reached with a sufficiently long pipeline, latency is the key factor in reducing cache stalling. The Sonics IP is a complex 7-stage architecture that has an inherently longer latency than the lean 2-stage architecture of similar clock frequency used in [8] and in this paper. Even though the authors of [18] highlight the importance of low-latency access to avoid cache delays, they do not elaborate on the latency effect of the relatively long pipeline (in [18], only memory bandwidth figures are published, not latencies).…”
Section: Related Work
confidence: 99%
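The trade-off the statement above describes can be made concrete with a small sketch: at the same clock frequency, a fully pipelined controller sustains the same throughput regardless of pipeline depth, but each added stage adds a cycle of fixed latency, which is what stalls a cache on a miss. The 100 MHz clock below is an illustrative assumption, not a figure from the paper.

```python
# Illustrative sketch (assumed numbers): deeper pipelines keep bandwidth
# but pay extra per-access latency, which is what matters for cache stalls.

CLOCK_MHZ = 100  # assumed clock frequency, same for both controllers


def access_latency_ns(pipeline_stages, clock_mhz=CLOCK_MHZ):
    """Latency of a single unloaded access through the controller pipeline."""
    cycle_ns = 1000.0 / clock_mhz
    return pipeline_stages * cycle_ns


def throughput_words_per_us(clock_mhz=CLOCK_MHZ):
    """A fully pipelined controller issues one word per cycle, independent of depth."""
    return clock_mhz  # words per microsecond at one word per cycle


deep = access_latency_ns(7)  # e.g. a 7-stage architecture
lean = access_latency_ns(2)  # e.g. a lean 2-stage architecture

print(f"7-stage latency: {deep:.0f} ns, 2-stage latency: {lean:.0f} ns")
print(f"Throughput (both): {throughput_words_per_us()} words/us")
```

At equal clock frequency both designs deliver the same peak bandwidth, yet every cache miss waits 3.5x longer through the 7-stage path, which is the latency effect the citing authors argue goes unreported.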
“…Figure 2 shows the block diagram. A more detailed description of the general architecture can be found in [8]; this chapter focuses on the QoS implementation.…”
Section: Related Work
confidence: 99%
“…An approach that increases data parallelism by using a multi-port cache has also been implemented [15]. Another kind of memory-controller interface for FPGAs was proposed, in which parallel units access memory concurrently to achieve high bandwidth utilization [16]. Liu et al. developed a technique to optimize memory accesses to scratchpad memories (SPMs) within loop nests in order to exploit data reuse [17].…”
Section: Introduction
confidence: 99%