Speaker
Jitao Xu
Description
Diffusion-based generative models have recently emerged as a powerful alternative to GANs, VAEs, and normalizing flows for learning complex, high-dimensional physics distributions. After briefly introducing the forward–reverse noising process, we demonstrate how a conditional diffusion model can replace the costly Monte Carlo event generator that maps quantum-correlation-function parameters to observable scattering events in deep-inelastic-scattering simulations, achieving higher fidelity and more stable training than a baseline conditional GAN.
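The forward noising process mentioned above can be sketched in a few lines. This is a minimal, illustrative NumPy example of the standard closed-form DDPM forward step (the noise schedule and toy data below are assumptions for illustration, not details of the speaker's model):

```python
import numpy as np

def forward_noise(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Illustrative linear beta schedule over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 2))  # toy "events": 4 samples, 2 features
xt, eps = forward_noise(x0, T - 1, alpha_bar, rng)
# By t = T-1, alpha_bar is nearly zero, so x_t is almost pure Gaussian noise.
```

The reverse process then trains a (here, condition-aware) network to predict `eps` from `xt` and `t`, which is the part the talk's conditional model replaces in the event-generation pipeline.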
Author
Jitao Xu