
May 8 – 12, 2023
Norfolk Waterside Marriott
US/Eastern timezone

The HSF Conditions Database reference implementation

May 9, 2023, 5:30 PM
15m
Norfolk Ballroom III-V (Norfolk Waterside Marriott, 235 East Main Street, Norfolk, VA 23510)
Oral, Track 1 - Data and Metadata Organization, Management and Access

Speaker

Gerlach, Lino (Brookhaven National Laboratory)

Description

The HSF Conditions Databases activity is a forum for cross-experiment discussions, aiming for as broad a participation as possible. It grew out of the HSF Community White Paper work to study conditions data access, where experts from ATLAS, Belle II, and CMS converged on a common language and proposed a schema that represents best practice. The focus of the HSF work is the most difficult use case, specifically the subset of non-event data needed by distributed computing resources to process event data at access rates of up to 10 kHz. Following discussions with a broader community, including NP as well as HEP experiments, a core set of use cases, functionality and behaviour was defined with the aim of describing a core Conditions Database API. This contribution will describe the reference implementation of both the conditions database service and the client, which together encapsulate HSF best practice in conditions data handling.
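
To make the intended usage pattern concrete, the following sketch shows in Python the two-step interaction such an API implies: a client first asks the conditions service which payloads are valid for a given global tag and interval of validity (IOV), and only then reads the payload files themselves. The service URL, endpoint path, parameter names and response shape are illustrative assumptions, not the actual reference API.

```python
# Hypothetical client-side sketch of the two-step conditions lookup.
# Endpoint path, parameter names and response shape are assumptions.
import os
import requests

BASE_URL = "https://conditions.example.org/api"  # hypothetical service URL


def resolve_payloads(global_tag: str, run_number: int) -> dict:
    """Step 1: ask the service which payloads are valid for this global tag and IOV."""
    resp = requests.get(
        f"{BASE_URL}/payloadiovs/",
        params={"gtName": global_tag, "majorIOV": run_number, "minorIOV": 0},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {payload_type: relative_payload_path, ...}
    return resp.json()


def fetch_payload(storage_prefix: str, payload_path: str) -> bytes:
    """Step 2: read the payload itself from a POSIX-compliant filesystem."""
    with open(os.path.join(storage_prefix, payload_path), "rb") as f:
        return f.read()


if __name__ == "__main__":
    payloads = resolve_payloads("example_global_tag", run_number=42)
    for payload_type, rel_path in payloads.items():
        blob = fetch_payload("/path/to/payload/store", rel_path)  # hypothetical mount point
        print(payload_type, len(blob), "bytes")
```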

Django was chosen for the service implementation, which uses an ORM rather than direct SQL. The simple relational database schema used to organise conditions data is implemented in PostgreSQL. The task of storing the conditions data payloads themselves is outsourced to any POSIX-compliant filesystem, allowing for transparent relocation and redundancy. Crucially, this design provides a clear separation between retrieving the metadata describing which conditions data are needed for a data processing job and retrieving the actual payloads from storage. The deployment using Helm on OKD will be described, together with scaling tests and operations experience from the sPHENIX experiment running on tens of thousands of cores at BNL.
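
As an illustration of what such a simple relational schema can look like when expressed through the Django ORM, the sketch below relates global tags, payload types and payload IOVs, storing only a path to each payload file so that the bulk data stay on the filesystem. Model and field names are assumptions for illustration, not the reference implementation's actual schema.

```python
# Hypothetical Django ORM sketch of a simple conditions-data schema.
# Model and field names are illustrative assumptions.
from django.db import models


class GlobalTag(models.Model):
    """A named, consistent set of conditions for a processing campaign."""
    name = models.CharField(max_length=255, unique=True)


class PayloadType(models.Model):
    """A category of conditions data, e.g. a calibration for one subdetector."""
    name = models.CharField(max_length=255, unique=True)


class PayloadList(models.Model):
    """Associates one payload type with one global tag."""
    global_tag = models.ForeignKey(GlobalTag, on_delete=models.CASCADE)
    payload_type = models.ForeignKey(PayloadType, on_delete=models.CASCADE)


class PayloadIOV(models.Model):
    """One payload file and the start of its interval of validity (IOV).
    Only the file's path is stored; the payload itself lives on a
    POSIX-compliant filesystem, keeping metadata and bulk data separate."""
    payload_list = models.ForeignKey(PayloadList, on_delete=models.CASCADE)
    payload_url = models.CharField(max_length=1024)  # path relative to the payload store
    major_iov = models.BigIntegerField()             # e.g. run number
    minor_iov = models.BigIntegerField(default=0)    # e.g. timestamp within the run
```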

Consider for long presentation: Yes

Primary author

Mashinistov, Ruslan (Brookhaven National Laboratory (US))

Co-authors

Gerlach, Lino (Brookhaven National Laboratory)
Laycock, Paul (BNL)
Formica, Andrea
Govi, Giacomo (Universita e INFN, Padova (IT))
Pinkenburg, Chris (Brookhaven National Laboratory)
