Description
Many large-scale physics experiments, such as ATLAS at the Large Hadron Collider, the Deep Underground Neutrino Experiment, and sPHENIX at the Relativistic Heavy Ion Collider, rely on accurate simulations to inform data analysis and derive scientific results. Inaccuracies in these simulations are inevitable; in a conventional analysis workflow they can be detected and corrected with heuristics.
However, residual errors introduce intractable bias when the simulations are used to train Artificial Intelligence/Machine Learning (AI/ML) models and those trained models are then applied to real detector data. Our goal is to develop a physics-informed ML framework that bridges the gap between simulations and experiments. We realize this goal by applying Generative Adversarial Networks (GANs) to transform data between domains, both to augment existing simulations and to extract subtle differences that may eventually improve our knowledge of the underlying physics processes. Our initial effort demonstrated the feasibility of this approach using a Vision Transformer augmented U-Net on toy data from Liquid Argon Time Projection Chamber simulations. In this talk, we present the latest results of this work, including investigations of the best-performing neural network architectures, efforts to optimize and stabilize performance, and initial results both on benchmark computer-vision datasets and on realistic data from large-scale physics experiments.
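To illustrate the core idea of adversarial domain translation described above, here is a minimal 1-D sketch: an affine "generator" (standing in for the ViT-augmented U-Net) learns to map samples from a source (simulation-like) distribution onto a target (data-like) distribution by fooling a logistic "discriminator". Everything here is a toy assumption for illustration; it is not the authors' actual architecture, training procedure, or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "domains": source (simulation-like) and target (data-like) samples.
x_src = rng.normal(0.0, 1.0, size=1000)   # source domain, mean 0
y_tgt = rng.normal(2.0, 0.5, size=1000)   # target domain, mean 2

# Generator: affine map g(x) = a*x + b (toy stand-in for the U-Net).
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(z) = sigmoid(w*z + c).
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    xb = rng.choice(x_src, 64)
    yb = rng.choice(y_tgt, 64)
    g = a * xb + b

    # Discriminator gradient ascent on log d(y) + log(1 - d(g(x))).
    d_real = sigmoid(w * yb + c)
    d_fake = sigmoid(w * g + c)
    w += lr * np.mean((1 - d_real) * yb - d_fake * g)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator gradient ascent on the non-saturating objective log d(g(x)).
    d_fake = sigmoid(w * g + c)
    grad_g = (1 - d_fake) * w          # d/dg of log d(g)
    a += lr * np.mean(grad_g * xb)
    b += lr * np.mean(grad_g)

# After training, translated source samples should drift toward the target
# distribution (mean near 2), i.e. the generator has learned a domain map.
translated = a * x_src + b
print(float(np.mean(translated)))
```

The real framework replaces the affine map with a Vision Transformer augmented U-Net acting on detector images, and the logistic classifier with a convolutional discriminator, but the adversarial training loop has the same structure.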