2018
DOI: 10.1007/978-3-319-69953-0_10

TINS: A Task-Based Dynamic Helper Core Strategy for In Situ Analytics

Abstract: The in situ paradigm proposes to co-locate simulation and analytics on the same compute node to analyze data while still resident in the compute node memory, hence reducing the need for postprocessing methods. A standard approach that has proved efficient for sharing resources on each node consists of running the analytics processes on a set of dedicated cores, called helper cores, to isolate them from the simulation processes. Simulation and analytics thus run concurrently with limited interference. In this paper…
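The static helper-core scheme summarized in the abstract can be illustrated with a minimal sketch: analytics threads are pinned to a few reserved cores while the simulation keeps the remaining ones, so the two workloads share the node with limited interference. The core split and the run_simulation/run_analytics placeholders below are illustrative assumptions, not the paper's implementation.

```cpp
// Minimal sketch of static helper cores: a few cores are reserved for
// analytics, the rest run the simulation. Core counts and the worker
// bodies are hypothetical placeholders, not the TINS code.
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

void pin_to_core(int core_id) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

void run_simulation(int core_id) {
    pin_to_core(core_id);
    std::printf("simulation worker on core %d\n", core_id);  // time loop would go here
}

void run_analytics(int core_id) {
    pin_to_core(core_id);
    std::printf("analytics helper on core %d\n", core_id);   // analyze in-memory data here
}

int main() {
    const int n_cores  = static_cast<int>(std::thread::hardware_concurrency());
    const int n_helper = 2;  // cores statically reserved for analytics (arbitrary choice)
    std::vector<std::thread> workers;
    for (int c = 0; c < n_cores - n_helper; ++c) workers.emplace_back(run_simulation, c);
    for (int c = n_cores - n_helper; c < n_cores; ++c) workers.emplace_back(run_analytics, c);
    for (auto& t : workers) t.join();
}
```

Such a static split isolates analytics from the simulation, but the helper cores sit idle when no analysis is pending, which is the inefficiency the paper's dynamic strategy targets.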

Cited by 13 publications (9 citation statements)
References 22 publications
“…As there is usually a small number of analysis functions, this exponential factor remains limited (cf. Table 1 in the study by Dirand et al. (2018) for an example).…”
Section: Scheduling Strategies for High-Throughput Applications (mentioning)
confidence: 99%
“…Damaris (Dorier et al., 2012), FlowVR (Dreher and Raffin, 2014), Functional Partitioning (Li et al., 2010), GePSeA (Singh et al., 2009), Active Buffer (Ma et al., 2006), or SMART (Wang et al., 2015) adopt this solution. TINS (Dirand et al., 2018) introduced dynamic helper cores, dedicating cores to analysis only when analysis tasks are ready to be run. Even if the in situ processing simply consists of saving data to disk, this approach can be more efficient than relying on standard I/O libraries like MPI I/O (Dorier et al., 2012).…”
Section: Related Work (mentioning)
confidence: 99%
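The dynamic helper-core behaviour described in the quote above can be sketched as follows: no core is permanently reserved; workers pick up analytics tasks only when such tasks are ready, and otherwise keep the core available for simulation work. The ready queue, the worker loop, and the yield placeholder standing in for simulation progress are assumptions for illustration, not the TINS implementation.

```cpp
// Sketch of dynamic helper cores: a core runs analysis only when an
// analysis task is actually ready; otherwise it stays available for the
// simulation (represented here by a simple yield).
#include <tbb/concurrent_queue.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

using Task = std::function<void()>;
tbb::concurrent_queue<Task> analytics_ready;  // filled when the simulation publishes new data
std::atomic<bool> running{true};

void worker_loop() {
    while (running) {
        Task analysis;
        if (analytics_ready.try_pop(analysis))
            analysis();                  // the core temporarily acts as a helper core
        else
            std::this_thread::yield();   // stand-in for advancing the simulation
    }
}

int main() {
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < std::thread::hardware_concurrency(); ++c)
        workers.emplace_back(worker_loop);
    analytics_ready.push([] { std::puts("analysis task executed"); });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running = false;
    for (auto& t : workers) t.join();
}
```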
“…Task-based programming, where tasks are dynamically distributed to compute resources, is now standard for shared-memory programming, using for instance OpenMP or Intel TBB. TINS [21] leverages this approach, as long as the simulation is also parallelized with tasks on each node. TINS relies on the TBB work-stealing scheduler to dynamically distribute the tasks, whether simulation or analytics tasks, across the cores.…”
Section: Related Work (mentioning)
confidence: 99%
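A minimal sketch of the task-based scheme this quote describes, assuming hypothetical per-block simulate_block and analyze_block kernels: both simulation and analytics work are expressed as TBB tasks, and the TBB work-stealing scheduler decides at run time which cores execute which tasks.

```cpp
// Both simulation and analytics are submitted as TBB tasks; idle workers
// steal whichever tasks are ready, so no core is statically assigned to
// either role. simulate_block/analyze_block are hypothetical stand-ins.
#include <tbb/parallel_for.h>
#include <tbb/task_group.h>
#include <cstdio>

void simulate_block(int b) { std::printf("simulate block %d\n", b); }
void analyze_block(int b)  { std::printf("analyze block %d\n", b); }

void time_step(int n_blocks) {
    tbb::task_group tg;
    // Simulation tasks: one per block, spread over the cores in parallel.
    tg.run([n_blocks] {
        tbb::parallel_for(0, n_blocks, [](int b) { simulate_block(b); });
    });
    // Analytics tasks spawned alongside: any idle worker can steal them.
    tg.run([n_blocks] {
        tbb::parallel_for(0, n_blocks, [](int b) { analyze_block(b); });
    });
    tg.wait();  // synchronize simulation and analytics at the end of the step
}

int main() { time_step(8); }
```

Under work stealing, an idle core picks up analytics tasks as soon as they become ready and then returns to simulation tasks, which is consistent with the dynamic helper-core behaviour quoted above.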