May 8 – 12, 2023
Norfolk Waterside Marriott
US/Eastern timezone

Optimizing AI-based HEP algorithms using HPC and Quantum Computing

Not scheduled
1h
Hampton Roads Ballroom and Foyer Area (Norfolk Waterside Marriott)
235 East Main Street, Norfolk, VA 23510
Poster Session

Speaker

Girone, Maria (CERN)

Description

In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers from science and industry develop novel, scalable Artificial Intelligence technologies towards Exascale. In this work, we leverage European High Performance Computing (HPC) and Quantum Computing resources to perform large-scale hyperparameter optimization (HPO), multi-node distributed data-parallel training, and benchmarking, using multiple compute nodes, each equipped with multiple GPUs.
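
A minimal sketch of multi-node, multi-GPU data-parallel training with PyTorch DistributedDataParallel, shown for illustration only; the linear model, random tensors, and launch settings are placeholders rather than the CoE RAISE production setup.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    def main():
        # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy model and data standing in for the event-reconstruction network.
        model = DDP(torch.nn.Linear(64, 1).cuda(local_rank), device_ids=[local_rank])
        data = TensorDataset(torch.randn(4096, 64), torch.randn(4096, 1))
        sampler = DistributedSampler(data)          # shards the data across ranks
        loader = DataLoader(data, batch_size=128, sampler=sampler)

        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = torch.nn.MSELoss()
        for epoch in range(3):
            sampler.set_epoch(epoch)                # reshuffle shards each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                opt.zero_grad()
                loss_fn(model(x), y).backward()     # gradients are all-reduced by DDP
                opt.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, e.g., torchrun --nnodes=2 --nproc_per_node=4 train.py, this starts one process per GPU on each node.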

Training and HPO of deep learning-based AI models are often compute-resource intensive and call for the use of large-scale distributed resources as well as scalable and resource-efficient hyperparameter search algorithms. In addition, we present results from the development of a containerized benchmark based on an AI model for event reconstruction that allows us to compare and assess the suitability of different hardware accelerators for training deep neural networks.
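
One resource-efficient search strategy is successive halving, sketched below purely for illustration (the abstract does not name the exact algorithm used): many random configurations receive a small training budget, and only the best fraction is promoted to larger budgets. The objective function here is a synthetic stand-in for an actual training run.

    import random

    def objective(config, budget):
        """Placeholder: pretend to train for `budget` epochs, return a validation loss."""
        return (config["lr"] - 1e-3) ** 2 + 0.1 / budget + random.uniform(0, 0.01)

    def successive_halving(n_configs=27, min_budget=1, eta=3, rounds=3):
        configs = [{"lr": 10 ** random.uniform(-5, -1),
                    "batch_size": random.choice([64, 128, 256, 512])}
                   for _ in range(n_configs)]
        budget = min_budget
        for _ in range(rounds):
            configs = sorted(configs, key=lambda c: objective(c, budget))
            configs = configs[:max(1, len(configs) // eta)]  # keep the top 1/eta
            budget *= eta                                    # survivors get a larger budget
        return configs[0]

    print(successive_halving())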

Furthermore, we explore the applicability of recent deep neural network architectures for pattern recognition on point clouds and graphs to data-intensive particle shower reconstruction during the High-Luminosity phase of the Large Hadron Collider. AI-based methods show promising physics results and favourable scaling with sample size compared to traditional approaches.
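
For illustration, a small point-cloud network in the spirit of such architectures can be sketched with PyTorch Geometric's DynamicEdgeConv (a k-nearest-neighbour EdgeConv); the input features, layer sizes, and outputs are hypothetical and do not describe the network studied here. Building the kNN graph on the fly requires torch-cluster.

    import torch
    from torch import nn
    from torch_geometric.nn import DynamicEdgeConv, global_mean_pool

    class ShowerNet(nn.Module):
        def __init__(self, in_feats=4, hidden=64, out_feats=2, k=16):
            super().__init__()
            # Each EdgeConv MLP sees the concatenated pair (x_i, x_j - x_i).
            self.conv1 = DynamicEdgeConv(nn.Sequential(
                nn.Linear(2 * in_feats, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden)), k=k)
            self.conv2 = DynamicEdgeConv(nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden)), k=k)
            self.out = nn.Linear(hidden, out_feats)  # e.g. per-shower regression targets

        def forward(self, x, batch):
            # x: [num_hits, in_feats] detector hits; batch: event assignment per hit
            h = self.conv1(x, batch)
            h = self.conv2(h, batch)
            return self.out(global_mean_pool(h, batch))

    model = ShowerNet()
    hits = torch.randn(100, 4)                  # toy event with 100 hits
    batch = torch.zeros(100, dtype=torch.long)  # all hits belong to one event
    print(model(hits, batch).shape)             # -> torch.Size([1, 2])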

We investigate the potential to speed up the HPO process via performance prediction, as well as the use of quantum annealing (QA) to train the performance predictor. We use the D-Wave Advantage™ System JUPSI at Forschungszentrum Jülich (FZJ) to train quantum support vector regression (QSVR) models with the aim of speeding up HPO of deep learning-based AI models, obtaining QSVR performance comparable to that of a classical SVR.
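
The performance-prediction step can be sketched with a classical SVR baseline from scikit-learn on synthetic placeholder data; in the quantum variant, the SVR training problem is instead cast as a QUBO and sampled on the annealer (that formulation is not shown here).

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Features: hyperparameters of already-evaluated trials (e.g. log lr, depth, width).
    X = rng.uniform(size=(100, 3))
    # Target: final validation loss of those trials (synthetic surrogate here).
    y = 0.5 * X[:, 0] ** 2 + 0.2 * X[:, 1] + 0.05 * rng.normal(size=100)

    predictor = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    predictor.fit(X, y)

    # New candidate configurations can be screened cheaply: only those with a
    # promising predicted loss are actually trained, shortening the HPO loop.
    candidates = rng.uniform(size=(5, 3))
    print(predictor.predict(candidates))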

To aid in testing and development, the framework has been containerized for rapid deployment on heterogeneous architectures. As an application-specific AI benchmark, it allows datasets and parameters to be swapped and facilitates direct performance comparison across diverse hardware architectures and configurations.
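
A toy throughput measurement in the spirit of such a benchmark might look like the following sketch; the model, data, and step count are placeholders and not the containerized benchmark itself.

    import time
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Sequential(torch.nn.Linear(256, 512), torch.nn.ReLU(),
                                torch.nn.Linear(512, 1)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x = torch.randn(8192, 256, device=device)
    y = torch.randn(8192, 1, device=device)

    steps, batch = 50, 256
    start = time.perf_counter()
    for i in range(steps):
        lo = (i * batch) % 8192
        xb, yb = x[lo:lo + batch], y[lo:lo + batch]
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(xb), yb).backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()            # wait for queued GPU work before timing
    elapsed = time.perf_counter() - start
    print(f"{device}: {steps * batch / elapsed:.0f} samples/s")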
