Simulation is a critical component of high energy physics research, with a correspondingly large computing footprint. Generative AI has emerged as a promising complement to computationally intensive full simulation, offering higher accuracy than existing classical fast simulation alternatives. Such algorithms are naturally suited to acceleration on coprocessors, potentially running fast enough to match the high data volumes at next-generation experiments. Numerous techniques are currently being explored, each with its own advantages and challenges. Looking beyond the next generation, foundational building blocks of AI such as automatic differentiation and gradient descent are now being incorporated into fully differentiable programming. This new paradigm will provide revolutionary tools for designing and optimizing future experiments.