Proceedings of the Design Automation & Test in Europe Conference 2006
DOI: 10.1109/date.2006.243950

Communication-aware allocation and scheduling framework for stream-oriented multi-processor systems-on-chip

Abstract:

Cited by 64 publications (36 citation statements)
References 26 publications
“…Rong et al utilize integer linear programming for finding the optimal voltage schedule and task ordering for a system with a single core and peripheral devices [19]. In [21], the MPSoC scheduling problem is solved with the objectives of minimizing the data transfer on the bus and guaranteeing deadlines for the average case. Minimizing energy on MPSoCs using dynamic voltage scaling (DVS) has been formulated using a two-phase framework in [28].…”
Section: Related Work
confidence: 99%
“…An adaptation of the list scheduling heuristic has been proposed in [24] in the context of DSP processors. In [25,6] methods based on ILP/CP decomposition are used to find accurate solutions to mapping/scheduling problems. They take more realistic constraints into account but do not explore pipelining as we do.…”
Section: Discussion
confidence: 99%
“…Moreover, reads and writes on the same queue are linked from […] The way reading and writing activities are scheduled heavily depends on the task graph structure. If we restrict our analysis to pipelined task graphs (i.e., dependences among tasks are such that they are logically ordered in a pipeline, as in [2] and [8]), then input data reading activities can be considered tightly coupled with the computation activities of each task. Therefore, tasks writing their output data to shared memory just have their execution time increased by a quantity W_CNW/f_m, where W_CNW is the number of clock cycles for writing data (it depends on the amount of data to write) between a task and its successor in the pipeline and f_m is the frequency of the clock when task t is performed.…”
Section: B. Subproblem Model
confidence: 99%
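The write-cost model in the last quoted statement can be sketched in a few lines. This is a minimal illustration, not code from the paper: the function name, the example cycle count, and the clock frequency are all assumed values; only the relation (execution time increased by W_CNW/f_m) comes from the quote.

```python
# Sketch of the quoted write-cost model: a pipelined task that writes its
# output to shared memory has its execution time increased by W_CNW / f_m,
# where W_CNW is the number of clock cycles needed to write the data and
# f_m is the clock frequency while the task runs.

def effective_exec_time(base_time_s, write_cycles, clock_hz):
    """Execution time of a task including the shared-memory write cost.

    base_time_s  -- computation time of the task, in seconds
    write_cycles -- W_CNW: clock cycles needed to write the output data
    clock_hz     -- f_m: clock frequency when the task is performed
    """
    return base_time_s + write_cycles / clock_hz

# Illustrative numbers: 1 ms of computation plus 20,000 write cycles
# at a 200 MHz clock adds 0.1 ms of write time.
t = effective_exec_time(1e-3, 20_000, 200e6)
print(t)
```

Note that under this model the write penalty shrinks as f_m grows, which is why the scheduling of writes interacts with the clock frequency chosen for each task.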