Description
To better understand the experimental conditions and performance of the Large Hadron Collider (LHC), CERN experiments execute tens of thousands of loosely coupled Monte Carlo simulation workflows per hour on hundreds of thousands of small- to mid-size distributed computing resources federated by the Worldwide LHC Computing Grid (WLCG). While this approach has proven reliable during the first LHC runs, the WLCG alone will not be able to meet future computing needs. Meanwhile, High-Performance Computing resources, and more specifically supercomputers, offer a significant amount of additional computing power, but they also come with greater integration challenges.
This state-of-practice paper outlines years of work integrating LHCb simulation workflows on several supercomputers. The main contributions of this paper are: (i) an extensive description of the gaps that must be addressed to run High-Energy Physics Monte Carlo simulation workflows on supercomputers; (ii) various methods and proposals to submit High-Throughput Computing workflows and maximize the use of allocated CPU resources; (iii) a comprehensive analysis of LHCb production workflows running on diverse supercomputers.
| Consider for long presentation | No |
| --- | --- |