2021
DOI: 10.1007/978-3-030-71593-9_16
Implementation and Evaluation of CUDA-Unified Memory in Numba

Abstract: Python as a programming language is increasingly gaining importance, especially in data science, scientific computing, and parallel programming. With Numba-CUDA, it is even possible to program GPUs in Python using a CUDA-like programming style. However, Numba lacks support for CUDA unified memory, which can simplify programming even further and allows dynamic work distribution between GPUs and CPUs. In this work, we implement and evaluate support for unified memory in Numba. As expected, the performa…

Cited by 3 publications
(4 citation statements)
References 13 publications
“…Key in our research is that NumPy can perform operations on arrays of different but compatible dimensions. For example, we can add two arrays with dimensions 5x5 and 5:

    >>> arr1: np.ndarray = np.arange(1, 26, 1).reshape((5, 5))
    >>> arr1
    array([[ 1,  2,  3,  4,  5],
           [ 6,  7,  8,  9, 10],
           [11, 12, 13, 14, 15],
           [16, 17, 18, 19, 20],
           [21, 22, 23, 24, 25]])
    >>> arr2: np.ndarray = np.arange(1, 6)
    >>> arr2
    array([1, 2, 3, 4, 5])
    >>> np.add(arr1, arr2)
    array([[ 2,  4,  6,  8, 10],
           [ 7,  9, 11, 13, 15],
           [12, 14, 16, 18, 20],
           [17, 19, 21, 23, 25],
    ...…”
Section: >>> np.reshape(user_arr, (-1, 1)) → array([[1], [2], [3], [4], [5]])
confidence: 99%
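The broadcasting behavior described in the quoted passage can be reproduced with plain NumPy; the following is a minimal, self-contained sketch (variable names are illustrative):

```python
import numpy as np

# 5x5 matrix holding the values 1..25, row-major
arr1 = np.arange(1, 26).reshape((5, 5))

# 1-D array of length 5; note that np.arange(1, 6) is needed to get
# the five elements [1, 2, 3, 4, 5] (the stop value is exclusive)
arr2 = np.arange(1, 6)

# Broadcasting: the length-5 arr2 is added to every row of the 5x5 arr1
result = np.add(arr1, arr2)
print(result[0])  # first row: [ 2  4  6  8 10]
print(result[4])  # last row:  [22 24 26 28 30]
```

Broadcasting works here because the trailing dimensions match (5 and 5); a length-4 array, by contrast, would raise a `ValueError`.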
“…Additionally, Numba.CUDA extends the capabilities of Numba by allowing developers to write CUDA code in Python, thereby enabling the utilization of NVIDIA GPUs for general-purpose processing [18]. This is a significant advancement, as GPUs' parallel processing capabilities can significantly reduce computation time for specific tasks [19].…”
Section: Introduction
confidence: 99%
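Numba's CUDA dialect expresses a kernel as a Python function indexed by thread position. The sketch below deliberately does not require Numba or a GPU: it emulates a 1-D kernel launch with a plain Python loop to illustrate the programming model. The kernel body mirrors the shape of a `@cuda.jit` function (where the thread index would come from `cuda.grid(1)`); the `launch` helper and all names are illustrative, not part of any library:

```python
import numpy as np

def vector_add_kernel(tid, a, b, out):
    # In real Numba-CUDA this body would be decorated with @cuda.jit
    # and tid would be obtained via cuda.grid(1); here it is passed in.
    if tid < out.size:            # guard against out-of-range threads
        out[tid] = a[tid] + b[tid]

def launch(kernel, n_threads, *args):
    # CPU emulation of a kernel launch: run every "thread" serially.
    for tid in range(n_threads):
        kernel(tid, *args)

a = np.arange(5, dtype=np.float32)
b = np.arange(5, dtype=np.float32)
out = np.empty_like(a)
launch(vector_add_kernel, 8, a, b, out)  # over-provisioned grid; the guard handles extras
print(out)  # [0. 2. 4. 6. 8.]
```

The bounds guard is the idiomatic pattern in real CUDA kernels as well, since the launched grid is usually rounded up past the data size.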
“…GPUs have a different computing architecture than CPUs or FPGAs, and their internal operations are mainly matrix operations. The CUDA interface in Python is implemented through the Numba just-in-time compiler [37], [38]. In the kernel functions, variables are computed and transferred in the form of NumPy matrices, and the use of NumPy matrix variables in GPU kernel functions differs somewhat from that on CPUs, mainly in bitwise operations and value assignment.…”
Section: A. Numba Parallel Library Features
confidence: 99%
“…Virtual memory, a widely used memory management concept, extends beyond the conventional memory structure found in desktop PCs and multicomputer systems, including shared virtual memory (SVM) systems. Additionally, it acts as a bridge between the CPU and GPU, as illustrated in the compute unified device architecture (CUDA) [3].…”
Section: Introduction
confidence: 99%