
May 8 – 12, 2023
Norfolk Waterside Marriott
US/Eastern timezone

ICARUS signal processing with HEPnOS

May 11, 2023, 3:15 PM
15m
Norfolk Ballroom III-V (Norfolk Waterside Marriott)

235 East Main Street Norfolk, VA 23510
Oral, Track 1 - Data and Metadata Organization, Management and Access

Speaker

Syed, S. (FNAL)

Description

The LArSoft/art framework is used at Fermilab’s liquid argon time projection chamber experiments such as ICARUS to run traditional production workflows in a grid environment. It has become increasingly important to utilize HPC facilities for experimental data processing tasks. As part of the SciDAC-4 HEP Data Analytics on HPC and HEP Event Reconstruction with Cutting Edge Computing Architectures projects, we have been exploring ways to restructure HEP neutrino workflows to increase resource utilization when running at HPC facilities. Our explorations focus on taking advantage of distinct architectural features for data services, parallel application scheduling, and high CPU core counts available at these facilities. In this paper, we introduce changes needed to make use of a new system-wide event store called HEPnOS and efforts to maximize the throughput of newly available multicore algorithms with the available memory on the compute nodes. Performance results are shown for ALCF Theta using the early signal processing steps within the ICARUS production workflow.
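The abstract mentions balancing the throughput of multicore algorithms against the memory available on the compute nodes. A minimal sketch of that sizing trade-off, using purely illustrative numbers (not measured ICARUS or Theta values) and hypothetical names:

```python
# Hypothetical sketch: how many events can a node process concurrently
# given per-event memory footprint, total node memory, core count, and
# the number of threads each multicore algorithm instance uses?
# All figures below are illustrative, not measured ICARUS values.

def concurrent_events(node_mem_gb, mem_per_event_gb, cores, threads_per_event=1):
    """Return the number of events that can run at once without
    exhausting node memory or oversubscribing cores."""
    by_memory = node_mem_gb // mem_per_event_gb   # memory-limited count
    by_cores = cores // threads_per_event          # core-limited count
    return int(min(by_memory, by_cores))

# Example: a 192 GB node with 64 cores, 8 GB per event, 4 threads/event.
# Memory permits 24 concurrent events, cores permit 16, so 16 wins.
print(concurrent_events(192, 8, 64, threads_per_event=4))
```

Raising `threads_per_event` shortens each event's wall time but lowers the concurrent-event count, so the sweet spot depends on how well the algorithm scales with threads.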

HEPnOS is a HEP-specific distributed data store built on top of software components from the DOE-ASCR supported Mochi project. With facility-wide access to HEP event data, we can avoid processing constraints and bottlenecks present in file-based reconstruction workflows. Data stores such as HEPnOS leverage the high performance networks and memories available on HPC systems, and can help eliminate performance bottlenecks and issues that may appear when using parallel file systems.
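To illustrate the access-pattern difference described above: with a system-wide event store, any worker can fetch any event by its (run, subrun, event) key, rather than being bound to the events packed into its assigned input file. The toy class below is an invented stand-in with hypothetical names; it is not the actual HEPnOS C++ API, only a sketch of the keyed-access idea.

```python
# Toy stand-in for a facility-wide event store (names invented for
# illustration; the real HEPnOS API is C++ and differs).

class EventStore:
    """Events are addressed by a (run, subrun, event) key, so any
    worker on the system can fetch any event, with no per-file
    assignment of work."""

    def __init__(self):
        self._events = {}

    def put(self, run, subrun, event, payload):
        self._events[(run, subrun, event)] = payload

    def get(self, run, subrun, event):
        return self._events[(run, subrun, event)]

# File-based workflow: each job is tied to the events in its input file.
# Store-based workflow: workers pull whichever event is ready next, so
# one slow event does not stall the rest of its file.
store = EventStore()
for e in range(4):
    store.put(1, 0, e, f"raw waveforms {e}")
print(store.get(1, 0, 2))
```

In the real system the store is distributed across nodes and backed by high-performance network transports from the Mochi components, which is what lets it sidestep parallel-file-system bottlenecks.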

Primary authors

Berkman, S. (FNAL); Cerati, G. (FNAL); Gartung, P. (FNAL); Paterno, M. (FNAL); Peterka, T. (ANL); Ross, R. (ANL); Sehrish, S. (FNAL); Syed, S. (FNAL); Yildiz, O. (FNAL); Kowalkowski, J. (FNAL)

Presentation materials