Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming 2016
DOI: 10.1145/2951913.2951935

Hierarchical memory management for parallel programs

Abstract: An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to …
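
The paper targets strict functional languages with nested (fork-join) parallelism, where tasks fork further tasks and every task allocates at a high rate. As a rough illustration of that programming model, here is a minimal Standard ML sketch; the par combinator is a hypothetical stand-in, defined sequentially below so the sketch runs as written, since the parallel MLton extensions expose a comparable fork-join primitive under a module and name that depend on the version.

```sml
(* A sketch of nested fork-join parallelism in Standard ML.  The `par`
 * combinator is a sequential stand-in so the example runs as written;
 * parallel extensions of MLton provide a comparable fork-join primitive,
 * though under a different module and name. *)
val par : (unit -> 'a) * (unit -> 'b) -> 'a * 'b =
  fn (f, g) => (f (), g ())

(* Nested parallelism: every recursive call may fork two further tasks,
 * and each task allocates as it runs; this is the workload a parallel
 * memory manager has to keep up with. *)
fun fib n =
  if n < 2 then n
  else
    let val (a, b) = par (fn () => fib (n - 1), fn () => fib (n - 2))
    in a + b end

val () = print (Int.toString (fib 20) ^ "\n")
```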

Cited by 31 publications (27 citation statements); references 47 publications (14 reference statements).

Citation statements (ordered by relevance):

“…MLton is a mature, whole-program optimizing compiler which produces fast sequential code. The parallel extension to MLton was developed a number of years ago, but is being actively maintained and improved [Raghunathan et al. 2016]. Furthermore, MLton makes it easy to develop custom schedulers written directly in Standard ML with the use of its built-in library for user-level threads.…”
Section: Methods
confidence: 99%
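The custom schedulers mentioned in this excerpt are written against MLton.Thread, MLton's built-in library for user-level threads. The following is a minimal sketch of a cooperative round-robin scheduler over that interface, assuming only the documented operations new, prepare, and switch; it is an illustrative toy, not the scheduler actually used by the parallel extension.

```sml
(* A minimal cooperative round-robin scheduler sketch built on
 * MLton.Thread (user-level threads).  Illustrative only. *)
structure Scheduler =
struct
  structure T = MLton.Thread

  (* FIFO ready queue of prepared (runnable) user-level threads. *)
  val ready : T.Runnable.t list ref = ref []

  fun enqueue r = ready := !ready @ [r]

  fun dequeue () =
    case !ready of
       [] => NONE
     | r :: rs => (ready := rs; SOME r)

  (* Yield: requeue the current thread and run the next one, if any. *)
  fun yield () =
    case dequeue () of
       NONE => ()                      (* nothing else to run *)
     | SOME next =>
         T.switch (fn me => (enqueue (T.prepare (me, ())); next))

  (* Finish the current thread: run the next one, or exit if none. *)
  fun finish () =
    case dequeue () of
       NONE => OS.Process.exit OS.Process.success
     | SOME next => T.switch (fn _ => next)

  (* Spawn a new user-level thread that runs f and then finishes. *)
  fun spawn (f : unit -> unit) =
    enqueue (T.prepare (T.new (fn () => (f (); finish ())), ()))
end

(* Example: two threads interleave cooperatively with the main thread,
 * printing a1, b1, a2, b2. *)
val () =
  ( Scheduler.spawn (fn () => (print "a1\n"; Scheduler.yield (); print "a2\n"))
  ; Scheduler.spawn (fn () => (print "b1\n"; Scheduler.yield (); print "b2\n"))
  ; Scheduler.yield ()
  ; Scheduler.yield ()
  )
```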
“…These developments have led to increased interest in languages and techniques for writing parallel programs. Many languages, libraries and systems have been developed, including NESL [Blelloch et al. 1994], Cilk [Frigo et al. 1998], Fork/Join Java [Lea 2000], Habanero Java [Imam and Sarkar 2014], TPL [Leijen et al. 2009], TBB [Intel 2011], X10 [Charles et al. 2005], parallel ML [Fluet et al. 2008, 2011; Raghunathan et al. 2016; Sivaramakrishnan et al. 2014], and parallel Haskell [Keller et al. 2010; Kuper et al. 2014].…”
Section: Introduction
confidence: 99%
“…Local heaps have been used in the context of garbage collection to reduce the amount of synchronisation required before [1–3, 13, 15, 24, 31, 34], where different threads have their own heap and share a global heap. However, only two of these have been proved correct.…”
Section: Definition 13 (I 7)
confidence: 99%
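The local-heap designs cited in this excerpt split memory into one private heap per thread plus a shared global heap. The sketch below is a toy Standard ML model of that split, assuming a promote-on-share policy (an object moves to the global heap when it must become visible to other threads); the cited collectors differ in exactly when and how objects are promoted, so this is only meant to make the shape of the design concrete.

```sml
(* Toy model of the local-heap / global-heap organisation: each thread
 * allocates into its own local heap; data that becomes shared between
 * threads is promoted to a single global heap.  The promote-on-share
 * policy is an illustrative assumption, not the policy of any particular
 * cited collector. *)
type obj = string                     (* stand-in for a heap object *)

type heaps =
  { global : obj list ref             (* shared global heap *)
  , locals : obj list ref vector }    (* one local heap per thread *)

fun mkHeaps numThreads : heaps =
  { global = ref []
  , locals = Vector.tabulate (numThreads, fn _ => ref []) }

(* Allocation is thread-local: no synchronisation with other threads. *)
fun alloc ({locals, ...} : heaps) (tid : int) (x : obj) : obj =
  let val h = Vector.sub (locals, tid)
  in h := x :: !h; x end

(* Promotion moves an object from its owner's local heap to the global
 * heap, making it safe for other threads to reach. *)
fun promote ({global, locals} : heaps) (tid : int) (x : obj) : unit =
  let val h = Vector.sub (locals, tid)
  in
    h := List.filter (fn y => y <> x) (!h);
    global := x :: !global
  end

(* Example: thread 0 allocates privately, then shares one object. *)
val hs = mkHeaps 4
val _  = alloc hs 0 "private-data"
val b  = alloc hs 0 "soon-shared"
val () = promote hs 0 b   (* b moves from thread 0's heap to the global heap *)
```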
“…Implicit threading seeks to make parallel programming easier by delegating certain tedious but important details, such as the scheduling of parallel tasks, to the compiler and the run-time system. There has been much work on specialized programming languages and language extensions for parallel systems, for various forms of implicit threading, including OpenMP [OpenMP Architecture Review Board 2008], Cilk [Frigo et al. 1998], Fork/Join Java [Lea 2000], Habanero Java [Imam and Sarkar 2014], NESL [Blelloch et al. 1994], TPL [Leijen et al. 2009], TBB [Intel 2011], X10 [Charles et al. 2005], parallel ML [Fluet et al. 2011; Jagannathan et al. 2010; Raghunathan et al. 2016], and parallel Haskell [Chakravarty et al. 2007; Keller et al. 2010].…”
Section: Introduction
confidence: 99%