Description
Analyses in HEP experiments often rely on large simulated (MC) datasets. These datasets are usually produced either with full-simulation approaches based on Geant4 or with parametric “fast” simulations that introduce approximations to reduce the computational cost. In this work we developed a prototype of a new fast simulation, named “flashsim”, targeting analysis-level data tiers (such as CMS NanoAOD). The software is based on machine learning, in particular on the Normalizing Flows generative model. We will present the physics results achieved with this prototype, which currently simulates only a few physics-object collections, in terms of: 1) the accuracy of object properties, 2) the correlations among pairs of observables, and 3) comparisons of analysis-level derived quantities and discriminators between full simulation and flash simulation of the very same events. The speed-up obtained with this approach amounts to several orders of magnitude, so that with flashsim the simulation bottleneck becomes the “generator” step (e.g. Pythia). We further investigated upsampling techniques, reusing the same generated event by passing it multiple times through the detector simulation, in order to understand the increase in statistical precision that can ultimately be achieved. The results obtained with the current prototype show higher physics accuracy and lower computing cost than other fast-simulation approaches such as the CMS standard FastSim and Delphes-based simulations.
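As a minimal illustration of the kind of model the abstract describes (not the actual flashsim implementation), the sketch below shows a conditional normalizing flow built from affine coupling layers in PyTorch: it maps a latent Gaussian to reconstructed, NanoAOD-like object features conditioned on generator-level inputs, and it oversamples by drawing several independent reco-level samples from the same generated event. All class names, feature dimensions, and architecture choices here are illustrative assumptions; in practice such a model would be trained by maximizing `log_prob` on (gen-level, full-sim reco-level) pairs.

```python
import math
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One conditional affine coupling block: half of the reco features are
    rescaled and shifted with parameters predicted from the other half and
    from the gen-level conditioning vector."""

    def __init__(self, dim, cond_dim, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0, "this sketch assumes an even feature dimension"
        self.half, self.flip = dim // 2, flip
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def _params(self, xa, cond):
        s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        return torch.tanh(s), t  # tanh keeps the scale factors well behaved

    def forward(self, x, cond):  # data -> latent, returns (z, log|det J|)
        xa, xb = x.chunk(2, dim=-1)
        if self.flip:
            xa, xb = xb, xa
        s, t = self._params(xa, cond)
        yb = xb * torch.exp(s) + t
        out = torch.cat([yb, xa] if self.flip else [xa, yb], dim=-1)
        return out, s.sum(dim=-1)

    def inverse(self, z, cond):  # latent -> data, used for generation
        if self.flip:
            zb, za = z.chunk(2, dim=-1)
        else:
            za, zb = z.chunk(2, dim=-1)
        s, t = self._params(za, cond)
        xb = (zb - t) * torch.exp(-s)
        return torch.cat([xb, za] if self.flip else [za, xb], dim=-1)


class FlashSimFlow(nn.Module):
    """Stack of coupling layers with a standard-normal base distribution,
    conditioned on generator-level features."""

    def __init__(self, dim, cond_dim, n_layers=6):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, cond_dim, flip=(i % 2 == 1)) for i in range(n_layers)]
        )

    def log_prob(self, x, cond):  # training maximizes this on full-sim pairs
        log_det = torch.zeros(x.shape[0], device=x.device)
        for layer in self.layers:
            x, ld = layer(x, cond)
            log_det = log_det + ld
        base = -0.5 * (x ** 2).sum(dim=-1) - 0.5 * self.dim * math.log(2 * math.pi)
        return base + log_det

    @torch.no_grad()
    def sample(self, cond):  # one reco-level sample per conditioning row
        z = torch.randn(cond.shape[0], self.dim, device=cond.device)
        for layer in reversed(self.layers):
            z = layer.inverse(z, cond)
        return z


if __name__ == "__main__":
    # Toy sizes, purely illustrative: 5 gen-level features, 6 reco-level features.
    n_gen, gen_dim, reco_dim, oversampling = 4, 5, 6, 10
    flow = FlashSimFlow(dim=reco_dim, cond_dim=gen_dim)
    gen_events = torch.randn(n_gen, gen_dim)  # stand-in for generator output
    # Upsampling: reuse each generated event by drawing several independent
    # reco-level samples conditioned on the same gen-level features.
    cond = gen_events.repeat_interleave(oversampling, dim=0)
    reco = flow.sample(cond)
    print(reco.shape)  # (n_gen * oversampling, reco_dim)
```

Because sampling only requires a forward pass through a small network rather than a full detector simulation, this is where the orders-of-magnitude speed-up quoted in the abstract comes from; the statistical gain of oversampling is then limited by the shared generator-level fluctuations of the reused events.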
| Consider for long presentation | Yes |
|---|---|