2008
DOI: 10.1007/978-3-540-88436-1_38

GPU-MEME: Using Graphics Hardware to Accelerate Motif Finding in DNA Sequences

Abstract: Discovery of motifs that are repeated in groups of biological sequences is a major task in bioinformatics. Iterative methods such as expectation maximization (EM) are a common approach to finding such patterns. However, the corresponding algorithms are highly compute-intensive due to the small size and degenerate nature of biological motifs. Runtime requirements are likely to become even more severe with the rapid growth of available gene transcription data. In this paper we present a novel approa…
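The EM approach mentioned in the abstract can be illustrated with a minimal sketch of one iteration under a simplified OOPS-style model (one motif occurrence per sequence). The motif width, sequences, and pseudocount scheme below are illustrative assumptions, not details taken from the paper:

```python
# Minimal EM-iteration sketch for motif finding (illustrative, not
# the paper's implementation). OOPS assumption: exactly one motif
# occurrence per sequence.
ALPHABET = "ACGT"
W = 3  # motif width (illustrative choice)

def e_step(seq, pwm):
    """E-step: posterior probability of each motif start position,
    given the current position weight matrix (PWM)."""
    scores = []
    for i in range(len(seq) - W + 1):
        p = 1.0
        for j in range(W):
            p *= pwm[j][ALPHABET.index(seq[i + j])]
        scores.append(p)
    total = sum(scores)
    return [s / total for s in scores]

def m_step(seqs, posteriors):
    """M-step: re-estimate the PWM from expected motif occurrences,
    with a small pseudocount to avoid zero probabilities."""
    counts = [[0.25] * 4 for _ in range(W)]
    for seq, post in zip(seqs, posteriors):
        for i, z in enumerate(post):
            for j in range(W):
                counts[j][ALPHABET.index(seq[i + j])] += z
    return [[c / sum(row) for c in row] for row in counts]
```

A full run alternates `e_step` over every sequence with one `m_step` until the PWM converges. The E-step's per-position scoring is independent across positions and sequences, which is the data-parallel structure that a GPU implementation can exploit.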

Cited by 16 publications (6 citation statements)
References 9 publications
“…The Multiple Expectation Maximization for Motif Elicitation (MEME) is a popular and efficient approximate algorithm used by researchers. Chen et al [10] have accelerated this algorithm using GPUs and have achieved significant speed up over its parallel version. The authors claim that more speed up can be achieved by using a cluster of GPUs.…”
Section: Related Work
Mentioning confidence: 99%
“…Because these GPUs do not have to perform many of the generalized tasks that a CPU must perform, they have become highly optimized to perform tightly-coupled data-parallel processing with many, typically hundreds, of independent processor units and specialized memory addressing. GPU algorithms have been developed for many years for computational geometry tasks as part of graphics rendering, but it is only in the past few years where GPUs have been used for other tasks such as sequence analysis [ 12 - 14 ], machine learning [ 15 ] and molecular dynamics [ 16 , 17 ]. All of the early implementations had to contend with the constraints and difficulties of the limited programming environment available on the GPU, however this has changed in just the past couple of years.…”
Section: Introduction
Mentioning confidence: 99%
“…However, GPUs have limited instructions and limited parallelism relative to FPGA's configurability. The research in [10] employed acceleration using GPU. Another approach uses clusters of workstations [12].…”
Section: Introduction
Mentioning confidence: 99%