2014 International Conference on Advances in Electrical Engineering (ICAEE)
DOI: 10.1109/icaee.2014.6838557
ParaRMS algorithm: A parallel implementation of rate monotonic scheduling algorithm using OpenMP

Cited by 2 publications (3 citation statements)
References 5 publications
“…Also, it can be lower than the actual achievable bound given a specific system, and does not fit our concept. (A1: All processors are allocated to single CPUs based on period, A3: There is no data dependence among processes, A4: The execution time for each process is constant) [39]. To reliably perform schedulability, we verified with the exact schedulability method [40], [41] and determined it as the most accurate and reliable method for rate monotonic verification.…”
Section: Implementation and Analysis
confidence: 99%
“…OpenMP [10][11][12][13][14][15][16][17]: sample, sample, sample, sample, if the number of cores is 4.…”
Section: Overview of OpenMP
confidence: 99%
“…Measure the various performance parameters (like total execution time or run time, cache memory size, hit ratio, CPU utilization, etc.)
Step 5: Identify the concurrency in the above application code (independent sub-tasks)
Step 6: Write the parallel code using parallel programming languages (OpenMP)
Step 7: Run the above code using multicore machines
Step 8: Measure the various performance parameters (like total execution time or run time, cache memory size, hit ratio, CPU utilization, etc.)
Step 9: Rerun the above code using various thread sizes or processor cores (2, 4, 8, 16, 32, n) and make a comparative analysis table.…”
Section: Overview of OpenMP
confidence: 99%