2016
DOI: 10.1109/tc.2015.2458859

Power/Performance Trade-Offs in Real-Time SDRAM Command Scheduling

Abstract: Real-time safety-critical systems should provide hard bounds on an application's performance. SDRAM controllers used in this domain should therefore have a bounded worst-case bandwidth, response time, and power consumption. Existing works on real-time SDRAM controllers only consider a narrow range of memory devices, and do not evaluate how their schedulers' performance varies across memory generations, nor how the scheduling algorithm influences power usage. The extent to which the number of banks use…

Cited by 14 publications (15 citation statements)
References 32 publications
“…Since memory controllers have common components, e.g., the timing constraint counters and command bus, the corresponding TA can be reused when modeling other memory controllers. 4) Is easily adapted to different memory generations (e.g., DDR3 and LPDDR3) by replacing the timing constraint values for the specific memory device [8]. The Source in Fig.…”
Section: A. Overview of the TA Model
confidence: 99%
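
As a way to picture that adaptability, the sketch below shows how a scheduler model can be parameterized by a table of timing constraints. The struct, the choice of fields, and all numeric values are placeholders chosen for illustration; they are not taken from the cited TA model or from any datasheet.

    /* Illustrative sketch only: the field names follow common JEDEC-style
     * timing parameters, but the values below are placeholders, not taken
     * from any datasheet or from the cited TA model. */
    typedef struct {
        const char *device;   /* memory generation / device name      */
        unsigned tRCD;        /* ACT-to-RD/WR delay (cycles)          */
        unsigned tRP;         /* PRE-to-ACT delay (cycles)            */
        unsigned tRAS;        /* minimum ACT-to-PRE delay (cycles)    */
        unsigned tRRD;        /* ACT-to-ACT, different banks (cycles) */
        unsigned tFAW;        /* four-activate window (cycles)        */
        unsigned tWR;         /* write recovery time (cycles)         */
    } sdram_timings_t;

    /* Retargeting the same scheduler model to another memory generation
     * then amounts to selecting a different constant table. */
    static const sdram_timings_t ddr3_sketch   = { "DDR3 (placeholder)",   6, 6, 15, 4, 16, 6 };
    static const sdram_timings_t lpddr3_sketch = { "LPDDR3 (placeholder)", 9, 9, 21, 5, 25, 8 };
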
“…For each transaction size, BI and BC are configured to achieve the lowest execution time [23]. BS is aligned with BI to simplify the physical address decoding [8]. When the memory mapping is finished, the number (i.e., NrTrans) of transactions in the back-end is increased and command scheduling is triggered.…”
Section: Automata Templates of the Requestors and Front-end
confidence: 99%
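
A minimal sketch of this mapping step is given below. The lookup table and the helper map_transaction() are hypothetical and do not reproduce the cited front-end, but they show the two effects the quote describes: BS is aligned to a multiple of BI, and NrTrans is incremented so that command scheduling can be triggered.

    #include <stddef.h>

    /* Hypothetical front-end sketch: one (BI, BC) pair per supported
     * transaction size. The pairs are illustrative, not the lowest-latency
     * configurations reported in the cited work. */
    typedef struct { size_t bytes; unsigned bi; unsigned bc; } bibc_cfg_t;

    static const bibc_cfg_t cfg_table[] = {
        {  32, 2, 1 },
        {  64, 4, 1 },
        { 128, 4, 2 },
    };

    static unsigned nr_trans = 0;   /* NrTrans: transactions queued in the back-end */

    /* Map a transaction: pick (BI, BC) for its size and align the starting
     * bank BS to a multiple of BI, which keeps physical address decoding
     * simple. Returns 0 on success, -1 for an unsupported size. */
    static int map_transaction(size_t size, unsigned bank,
                               unsigned *bi, unsigned *bc, unsigned *bs)
    {
        for (size_t i = 0; i < sizeof cfg_table / sizeof cfg_table[0]; i++) {
            if (cfg_table[i].bytes == size) {
                *bi = cfg_table[i].bi;
                *bc = cfg_table[i].bc;
                *bs = bank - (bank % *bi);   /* BS aligned with BI */
                nr_trans++;                  /* scheduling is triggered next */
                return 0;
            }
        }
        return -1;
    }
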
“…These commands are sequentially buffered in the queue per bank. This repeats BI times for all the required consecutive banks, and the data access granularity is BI × BC × BL words [17]. By varying BI and BC, different transaction sizes are supported [4].…”
Section: A. Real-Time Memory Controller
confidence: 99%
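
The granularity relation quoted above is easy to check numerically. The short sketch below assumes a burst length of BL = 8 and a data-bus width given in bits; it is illustrative arithmetic, not code from the cited controller.

    #include <stdio.h>

    /* Access granularity in bytes: BI * BC * BL words, with each word as
     * wide as the data bus. */
    static unsigned granularity_bytes(unsigned bi, unsigned bc,
                                      unsigned bl, unsigned bus_bits)
    {
        return bi * bc * bl * (bus_bits / 8);
    }

    int main(void)
    {
        /* e.g. BI = 2, BC = 2, BL = 8 on a 32-bit bus: 32 words = 128 bytes */
        printf("%u bytes\n", granularity_bytes(2, 2, 8, 32));
        return 0;
    }
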
“…The size is mapped to BI and BC, while the physical address gives the starting bank (b_s). For example, when a system only generates 64 byte read and write transactions, e.g., the L2 cache line size is 64 bytes for all cores, the most efficient configuration of BI=4 and BC=1 is used to access a DDR3 SDRAM with a 16-bit data bus [17]. Since DDR3 SDRAMs have 8 banks, the transactions may either interleave consecutively over Bank 0 to Bank 3 or Bank 4 to Bank 7 for alignment reasons [17].…”
Section: B. The Analysis of WCBW
confidence: 99%
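
The numbers in that example can be verified directly: with BL = 8, BI = 4 and BC = 1 give 4 × 1 × 8 = 32 words, and 32 words on a 16-bit bus is exactly 64 bytes. The alignment constraint is equally mechanical, as the small sketch below shows (a hypothetical stand-in for the address decoder, not the cited controller's code): with 8 banks and BI = 4, the only aligned starting banks are 0 and 4.

    #include <stdio.h>

    /* With 8 banks and BI = 4, an aligned starting bank b_s can only be
     * 0 or 4, so a transaction interleaves over banks 0..3 or banks 4..7.
     * Illustrative only. */
    int main(void)
    {
        const unsigned nbanks = 8, bi = 4;
        for (unsigned bs = 0; bs < nbanks; bs += bi)
            printf("b_s = %u -> banks %u..%u\n", bs, bs, bs + bi - 1);
        return 0;
    }
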