2000
DOI: 10.1007/3-540-39997-6_8
Time-Sharing Parallel Jobs in the Presence of Multiple Resource Requirements

Cited by 8 publications (4 citation statements)
References 20 publications
“…Dynamic co-scheduling boosts the priority level of processes receiving communications, on the theory that this will keep parallel applications loaded across multiple nodes during periods of intense interactive communication [58]. Buffered co-scheduling buffers nonblocking communication events and waits to send messages until a globally coordinated synchronization event [59]. Coordinated co-scheduling is a more advanced technique that takes into account both sender and receiver side events when making local scheduling decisions [60].…”
Section: Co-schedulingmentioning
confidence: 99%
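The buffered co-scheduling idea quoted above — queue nonblocking sends locally and deliver them only at a globally coordinated synchronization event — can be sketched as follows. This is a minimal illustration, not the scheme of [59]; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BufferedCoscheduler:
    """Hypothetical sketch of buffered co-scheduling: nonblocking sends
    are queued locally and only delivered at a global sync point."""
    pending: list = field(default_factory=list)

    def isend(self, dest, msg):
        # Nonblocking send: buffer the message instead of transmitting now.
        self.pending.append((dest, msg))

    def global_sync(self, network):
        # At the coordinated synchronization event, flush every buffered
        # message at once; `network` maps node id -> inbox list.
        for dest, msg in self.pending:
            network.setdefault(dest, []).append(msg)
        flushed = len(self.pending)
        self.pending.clear()
        return flushed
```

Deferring delivery to the sync point is what lets all nodes exchange traffic in one coordinated burst rather than interrupting computation with individual messages.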
“…Batch schedulers such as the Maui Scheduler [22], NQE [35], the Portable Batch System [18] and experimental systems [7,33] as well as schedulers for privately owned networks of workstations and Grids, such as Condor-G [16] and the GrADS scheduling framework [12], use admission control schemes which schedule a job only on nodes with enough memory. This avoids thrashing at the cost of reduced utilization of memory and potentially higher job waiting times.…”
Section: Related Workmentioning
confidence: 99%
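The admission-control policy described in the excerpt — schedule a job only on nodes with enough memory, letting it wait otherwise — amounts to a simple eligibility filter. A hedged sketch, with illustrative names and a `free_mem` map that stands in for whatever node-state database a real batch scheduler keeps:

```python
def admit(job_mem_per_node, nodes_needed, free_mem):
    """Hypothetical admission-control check: run a job only if enough
    nodes have free memory covering its per-node requirement.

    free_mem: dict mapping node id -> free memory (e.g. in MB).
    Returns the chosen node ids, or None if the job must keep waiting.
    """
    eligible = [n for n, mem in free_mem.items() if mem >= job_mem_per_node]
    if len(eligible) < nodes_needed:
        return None  # not admitted: avoids thrashing at the cost of waiting
    return sorted(eligible)[:nodes_needed]
```

The trade-off noted in the excerpt is visible here: rejecting the job keeps every admitted process fully memory-resident (no thrashing), but leaves memory on ineligible nodes idle and may lengthen queue waits.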
“…Even tighter integration between communication and scheduling is used in the "buffered coscheduling" scheme proposed by Petrini and Feng [35,36]. In this scheme the execution of all jobs is partitioned by the system into phases.…”
Section: System Integrationmentioning
confidence: 99%
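The phase-partitioned execution mentioned in this excerpt can be illustrated with a toy driver: within each phase every job computes locally, and all buffered communication is exchanged only at the phase boundary, keeping the jobs' processes across nodes in lockstep. This is a schematic sketch of the phase structure only, not the Petrini–Feng mechanism; all names are illustrative.

```python
def run_phases(jobs, num_phases):
    """Hypothetical sketch of phase-partitioned execution: computation in
    each phase, followed by one global exchange at the phase boundary."""
    log = []
    for phase in range(num_phases):
        for job in jobs:
            # Local computation; any communication issued here would be
            # buffered rather than sent immediately.
            log.append((phase, job, "compute"))
        # Phase boundary: all buffered messages are exchanged at once.
        log.append((phase, "all", "exchange"))
    return log
```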