2012
DOI: 10.1088/1742-6596/396/5/052025
Evolution of grid-wide access to database resident information in ATLAS using Frontier

Abstract: The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken such as the optimization of cache …

Cited by 9 publications (3 citation statements). References 2 publications (2 reference statements).
“…The problem was found to be proportional to the distance along the WAN (wide area network), compounded by Oracle communication protocols requiring back-and-forth communication between client and server beyond a simple request-response interaction. During the same period, we had already been evaluating Frontier technology [6], a caching system that distributes data from data sources (in our case, the Oracle databases) to distributed clients (in our case, tasks on the grid) using straightforward HTTP protocols. Using a prototype Frontier deployment at one of the Tier 1 sites, our colleagues in Tokyo ran dedicated tests on their local grid site comparing database access times using the three available access methods. A figure in the source shows the database access times observed from event processing tasks on local Tokyo worker nodes to three database sources: a local SQLite file, a Frontier deployment in New York (USA), and direct database access in Lyon (France).…”
Section: Run 1 Database Access From the Gridmentioning
confidence: 99%
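The caching pattern described in the statement above, where read-only database queries travel as HTTP requests through caching proxies so that repeated queries never cross the WAN, can be sketched minimally in Python. All names below (`SquidLikeCache`, `encode_query`, the example URL) are hypothetical illustrations; the real Frontier server and Squid proxies implement a far richer protocol.

```python
# Minimal sketch of Frontier-style cached database access over HTTP.
# Hypothetical names throughout; not the actual Frontier wire format.
import hashlib

class SquidLikeCache:
    """In-memory stand-in for a Squid proxy: caches responses keyed by URL."""
    def __init__(self, origin):
        self.origin = origin      # callable url -> payload (the "server side")
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1        # served locally, no WAN round trip
            return self.store[url]
        self.misses += 1
        payload = self.origin(url)  # forward to the origin server once
        self.store[url] = payload
        return payload

def encode_query(sql):
    """Encode a read-only SQL query as a cacheable GET URL (illustrative)."""
    digest = hashlib.sha1(sql.encode()).hexdigest()[:16]
    return f"http://frontier.example.org/query?q={digest}"

def oracle_origin(url):
    # Stand-in for the Frontier server querying Oracle and serialising rows.
    return f"rows-for:{url}"

cache = SquidLikeCache(oracle_origin)
url = encode_query("SELECT * FROM conditions WHERE run = 152166")
first = cache.get(url)    # miss: forwarded to the origin (the database)
second = cache.get(url)   # hit: answered from the cache
print(cache.hits, cache.misses)  # 1 1
```

The point of the sketch is the single request-response interaction: unlike a chatty database protocol, one HTTP GET per query makes the response trivially cacheable at any intermediate proxy.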
“…Among the other changes made in the model, introducing Frontier/Squid has enabled remote access to the databases at Tier-0 and Tier-1 centres from any site [8]. It is now possible, for example, to run reprocessing jobs that require detector conditions data at Tier-2 centres.…”
Section: Adjusting the Atlas Computing Modelmentioning
confidence: 99%
“…A key component for grid-wide processing of ATLAS [1] event data is the global distribution of the database-resident information that processing requires. Since the deployment of Frontier technology [2] at grid sites early in Run 1, clients anywhere on the WLCG have benefited from efficient access to database-resident data, enabling a wide variety of event processing and analysis.…”
Section: Introductionmentioning
confidence: 99%