In this talk, we discuss the evolution of the computing model of the ATLAS experiment at the LHC. After LHC Run 1, it became clear that the computing resources available at the WLCG were fully utilized: the processing queue could reach millions of jobs during peak loads, for example before major scientific conferences and during large-scale data processing campaigns. The unprecedented performance of the LHC during Run 2 and the resulting large data volumes required more computing power than the WLCG consortium had pledged. In addition to unpledged/opportunistic resources available through the grid, the integration of resources such as supercomputers and commercial clouds into the ATLAS distributed computing model has led to significant changes in both the workload management system and the data management system, changing the computing model as a whole. The implementation of the data carousel model and data on demand, cloud and HPC integration, and other innovations have expanded the physics capabilities of high energy physics experiments and made bursty data simulation and processing possible. In the past few years ATLAS, along with many other High Energy Physics (HEP), Nuclear Physics (NP), and Astroparticle experiments, has evaluated commercial clouds as an additional part of its computing resources. In this talk, we will briefly describe the ATLAS-Google and ATLAS-Amazon projects and how they were fully integrated into the ATLAS computing model. We will address a fundamental question about the future computing model for experiments with large data volumes and distributed computing resources by considering three possible options:
- HEP/NP experiments will continue to own and use pledged resources
- HEP/NP experiments will buy resources from commercial providers
- HEP/NP experiments will own core resources and buy additional resources from commercial providers
Consider for long presentation: Yes