2011
DOI: 10.1098/rsta.2011.0151
Efficient large-scale replica-exchange simulations on production infrastructure

Abstract: Replica-exchange (RE) algorithms are used to understand physical phenomena, ranging from protein folding dynamics to binding affinity calculations. They represent a class of algorithms that involve a large number of loosely coupled ensembles, and are thus amenable to using distributed resources. We develop a framework for RE that supports different replica pairing (synchronous versus asynchronous) and exchange coordination mechanisms (centralized versus decentralized) and which can use a range of production cyberinfrastructure…
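As a minimal illustration of the exchange step underlying both pairing variants named in the abstract, the sketch below applies the standard parallel-tempering Metropolis criterion to swap temperatures between neighbouring replicas. The replica dictionaries and the pairwise sweep are simplifying assumptions for this sketch, not the paper's framework.

```python
import math
import random

def swap_probability(energy_i, energy_j, beta_i, beta_j):
    """Metropolis acceptance probability for exchanging two replicas
    held at inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return 1.0 if delta >= 0 else math.exp(delta)

def synchronous_exchange(replicas):
    """One synchronous exchange sweep: every replica pauses, and
    neighbouring pairs attempt a temperature swap.  Each replica is a
    dict with 'energy' and 'beta' keys (a layout assumed only here)."""
    for i in range(0, len(replicas) - 1, 2):
        a, b = replicas[i], replicas[i + 1]
        if random.random() < swap_probability(a["energy"], b["energy"],
                                              a["beta"], b["beta"]):
            a["beta"], b["beta"] = b["beta"], a["beta"]
    return replicas
```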

Cited by 11 publications (8 citation statements) | References 10 publications
“…15 This enables the efficient execution of a range of replica exchange schemes. An early prototype of the software system used for performing current simulations using the current asynchronous protocol has been described in Ref.…”
Section: Methods (mentioning)
confidence: 99%
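The asynchronous protocol mentioned in this statement removes the global synchronisation barrier: replicas attempt exchanges as they become ready rather than in lock-step sweeps. The sketch below is only a conceptual illustration of that pairing pattern; the queue and the attempt_swap hook are hypothetical and not the prototype's actual interfaces.

```python
import queue

def asynchronous_pairing(ready: queue.Queue, attempt_swap):
    """Pair replicas opportunistically: whenever two replicas have
    finished a simulation segment, attempt an exchange between them.
    No replica waits for the whole ensemble to reach a barrier."""
    pending = None
    while True:
        replica = ready.get()
        if replica is None:              # sentinel: coordinator shuts down
            break
        if pending is None:
            pending = replica            # hold until a partner arrives
        else:
            attempt_swap(pending, replica)   # e.g. the Metropolis test
            pending = None
```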
“…One example is a SAGA-based Pilot-Job called BigJob [23], which provides the ability to run multiple concurrent large-scale simulations; the advert-service provides the coordination capabilities, and can be used to manage more complicated coordinated workflows (for example replica exchange or ensemble methods with interdependent ensemble members). As shown in [12,13], this approach can be used to scale-out to hundreds of replicas/ensemble-members over multiple supercomputers concurrently.…”
Section: SAGA and BigJob (mentioning)
confidence: 98%
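As a rough, library-agnostic sketch of the Pilot-Job idea described here (many loosely coupled tasks multiplexed onto one resource allocation rather than submitted individually to the batch system), the code below uses Python's standard concurrent.futures. It is not the SAGA BigJob API, and run_replica is a placeholder for a real simulation launch.

```python
import time
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_replica(replica_id, steps):
    """Placeholder for one ensemble member; a real workload would
    launch an MD engine here instead of sleeping."""
    time.sleep(0.01 * steps)
    return {"replica": replica_id, "steps": steps}

def run_ensemble(n_replicas, workers):
    """Schedule many loosely coupled replicas onto a fixed pool of
    workers, in the spirit of a pilot that holds the allocation while
    individual tasks come and go."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_replica, i, 100) for i in range(n_replicas)]
        return [f.result() for f in as_completed(futures)]

if __name__ == "__main__":
    print(len(run_ensemble(n_replicas=8, workers=4)), "replicas completed")
```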
“…This is a common problem in high throughput computing (HTC), in that the requirement is to optimise the time to completion of a bag of tasks rather than the run time of any individual job. The infrastructure utilised in this study provides a general model that could also be used to facilitate simulation techniques in which some level of coupling between different simulations, such as replica exchange molecular modelling [12,13], is involved.…”
Section: Introduction (mentioning)
confidence: 99%
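To make the bag-of-tasks point concrete, the sketch below computes the time to completion (makespan) of a whole bag under a simple longest-processing-time-first assignment. The task durations and the heuristic are illustrative assumptions, not a description of the cited infrastructure.

```python
import heapq

def makespan(durations, workers):
    """Assign tasks (longest first) to whichever of `workers` identical
    workers becomes free earliest; return when the whole bag finishes,
    the quantity HTC aims to minimise."""
    finish_times = [0.0] * workers
    heapq.heapify(finish_times)
    for d in sorted(durations, reverse=True):
        earliest = heapq.heappop(finish_times)
        heapq.heappush(finish_times, earliest + d)
    return max(finish_times)

# Eight tasks totalling 34 time units finish in 9 units on 4 workers:
# the bag's completion time, not any single job's runtime, is what matters.
print(makespan([9, 7, 6, 5, 3, 2, 1, 1], workers=4))
```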
“…A metric for the simplicity of the model is the number of elements of the model, which is very low with four main concepts.…” [The remainder of this statement is a table fragment flattened during extraction; it compares the pilot abstractions Pilot-Job [63], [6], Pilot-Data [66], Pilot-Hadoop [67] and Pilot-Memory by their design (conceptual and architecture models [6], [63], [66], [67], [68], [32]) and by performance evaluations such as Adaptive Replica Exchange [48], [72], Ensemble Kalman Filter simulations [50], HIV binding [49], science portals [51], Pilot-MapReduce [54], genome sequencing and K-Means [66], [55], wordcount, and light-source data reconstruction.]
Section: A. Problem Identification and Design Evaluation (Eval 1/2) (mentioning)
confidence: 99%