2013
DOI: 10.1002/pmic.201300288
Distributed computing and data storage in proteomics: Many hands make light work, and a stronger memory

Abstract: Modern-day proteomics generates ever more complex data, causing the requirements on the storage and processing of such data to outgrow the capacity of most desktop computers. To cope with the increased computational demands, distributed architectures have gained substantial popularity in recent years. In this review, we provide an overview of the current techniques for distributed computing, along with examples of how these techniques are currently being employed in the field of proteomics. We thus underline…
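The review surveys distributed architectures for proteomics workloads. As an illustrative sketch (not taken from the paper), the simplest such pattern splits independent spectrum searches across local worker processes; the scan identifiers and the scoring function below are hypothetical placeholders for a real search engine's peptide-spectrum matching step.

```python
from multiprocessing import Pool

def match_spectrum(spectrum_id):
    # Hypothetical stand-in for a search engine's peptide-spectrum scoring.
    score = sum(ord(c) for c in spectrum_id) % 100
    return spectrum_id, score

if __name__ == "__main__":
    spectra = [f"scan_{i}" for i in range(8)]
    # Each spectrum is searched independently, so the workload is
    # embarrassingly parallel and maps cleanly onto worker processes.
    with Pool(processes=4) as pool:
        results = dict(pool.map(match_spectrum, spectra))
    print(results)
```

The same fan-out pattern scales from local cores to a cluster or cloud back end by swapping the process pool for a distributed task queue.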

Cited by 22 publications (19 citation statements)
References 86 publications (86 reference statements)
“…This is particularly problematic in the case of proteogenomics studies and metaproteomics. Distributed computing can help overcome some of these limitations, through the use of grid or cloud computing. This way, extensive processing power can be made available to the community at large, notably through the establishment of dedicated environments, like the Galaxy project.…”
Section: Conclusion and Overview
confidence: 99%
“…126 Distributed computing can help overcome some of these limitations, through the use of grid or cloud computing. 127 This way, extensive processing power can be made available to the community at large, notably through the establishment of dedicated environments, like the Galaxy project. 128,129 It is also possible to distribute tasks on a local cluster of computers, making it possible for most labs to carry out demanding searches even with limited informatics resources.…”
Section: Conclusion and Overview
confidence: 99%
“…In the 2010s, new emphasis was given to data processing, introducing new data-mining methodologies based on multivariate approaches, that is PCA, which are still used nowadays (for a review see ). In 2010, new multivariate algorithms for network building and comprehensive processing of the different omics levels became available, and PTM studies allowed an unprecedented power to study biological systems, which are now mainly limited by the need for highly demanding computer systems.…”
Section: Methodological Evolution in Plant Proteomics
confidence: 99%
“…novel variants and post-translational modifications (6)). However, in order to process these large amounts of (public) data, it is increasingly necessary to use elastic compute resources such as Linux-based cluster environments and cloud infrastructures (7).…”
Section: Introduction
confidence: 99%