2020
DOI: 10.1109/access.2019.2961091
Improving BDD Enumeration for LWE Problem Using GPU

Abstract: In this paper, we present a GPU-based parallel algorithm for the Learning With Errors (LWE) problem using a lattice-based Bounded Distance Decoding (BDD) approach. To the best of our knowledge, this is the first GPU-based implementation for the LWE problem. Compared to the sequential BDD implementations of the Lindner-Peikert and pruned-enumeration strategies by Kirshanova [1], our GPU-based implementation is faster by factors of almost 6 and 9, respectively. The GPU used is an NVIDIA GeForce GTX 1060 6GB. We also provide…
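For context on the approach named in the abstract: the core sequential subroutine behind BDD enumeration is Babai's nearest-plane walk, which the Lindner-Peikert NearestPlanes algorithm generalizes by keeping the d_i closest planes at each level instead of only the nearest one. Below is a minimal Python/NumPy sketch of that single-path baseline; it is not the paper's GPU implementation, and the toy basis and error in the usage example are illustrative assumptions.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization)."""
    Bs = np.zeros_like(B, dtype=float)
    for i in range(B.shape[0]):
        Bs[i] = B[i]
        for j in range(i):
            Bs[i] -= (B[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm: return a lattice vector of L(B) close to t.

    This single-path walk is the d_i = 1 special case of the Lindner-Peikert
    NearestPlanes enumeration used as the sequential BDD baseline.
    """
    Bs = gram_schmidt(B.astype(float))
    v = np.array(t, dtype=float)
    for i in reversed(range(B.shape[0])):
        # Pick the nearest translate of the i-th Gram-Schmidt hyperplane.
        c = round((v @ Bs[i]) / (Bs[i] @ Bs[i]))
        v -= c * B[i].astype(float)
    return np.array(t, dtype=float) - v  # the chosen close lattice vector

if __name__ == "__main__":
    B = np.array([[7, 0], [1, 5]])        # toy basis (not the paper's parameters)
    e = np.array([0.3, -0.4])             # small BDD-style error
    target = 2 * B[0] + 3 * B[1] + e      # perturbed lattice point
    print(babai_nearest_plane(B, target)) # recovers [17, 15]
```

The full NearestPlanes / pruned-enumeration variants branch at each level of this loop, which is what produces the enumeration tree that the paper parallelizes on the GPU.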

Cited by 3 publications (1 citation statement)
References 31 publications
“…The second strategy is based on expecting the best level α on which the cost of transferring the data between CPU and GPU is very low, while the third strategy is based on gaining the improvement based on GPU, which can be carried out by generating some subtrees in GPU rather than in CPU. The main idea of the third strategy is similar to another one used for improving BDD Enumeration [33].…”
Section: Our Contribution
confidence: 99%
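To make the split described in this statement concrete, here is a minimal Python sketch (not the cited code): the CPU enumerates coefficient prefixes down to a split level α, and each resulting subtree is then walked by a worker routine that stands in for a GPU kernel. All names (`cpu_prefixes`, `enumerate_subtree`) and the `d_counts`/`alpha` values are hypothetical, chosen only to illustrate the CPU/GPU division of work.

```python
import itertools

def cpu_prefixes(d_counts, alpha):
    """Enumerate all coefficient prefixes for the top `alpha` levels of the
    enumeration tree, where level i admits d_counts[i] child choices.
    (Choices are symbolic here; a real NearestPlanes picks the d_i closest planes.)"""
    per_level = [range(d) for d in d_counts[:alpha]]
    return list(itertools.product(*per_level))

def enumerate_subtree(prefix, d_counts, alpha):
    """Stand-in for a GPU kernel: exhaustively walk the subtree rooted at `prefix`.
    In the hybrid strategy, each such subtree is generated on the GPU, not the CPU."""
    per_level = [range(d) for d in d_counts[alpha:]]
    return [prefix + tail for tail in itertools.product(*per_level)]

if __name__ == "__main__":
    d_counts = [2, 2, 3, 1, 1]   # hypothetical d_i values per level
    alpha = 2                    # hypothetical split level minimizing transfer cost
    prefixes = cpu_prefixes(d_counts, alpha)                  # done on the CPU
    leaves = [leaf for p in prefixes
              for leaf in enumerate_subtree(p, d_counts, alpha)]  # offloaded work
    print(len(prefixes), "subtrees,", len(leaves), "leaves in total")
```

The choice of α trades the number of subtrees handed to the GPU against the amount of prefix data that must cross the CPU-GPU boundary, which is exactly the cost the quoted strategy tries to keep low.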