Machine learning (ML) and deep learning (DL) are powerful tools for modeling complex systems. However, most standard ML/DL models do not provide a measure of confidence or uncertainty associated with their predictions. Further, these models can only be trained on available data. During operation, a model may encounter data samples poorly reflected in the training data. Such samples are called Out-of-Distribution (OOD) samples, and predictions on them can be arbitrarily wrong. Uncertainty Quantification (UQ) is a technique that provides insight into a model's confidence in its predictions, including on OOD samples.
A Gaussian Process (GP) is a well-known ML method that provides accurate estimates of prediction uncertainty. We will present our work using GPs for AI-based Experimental Control to stabilize the gain measurement of the Central Drift Chamber in the GlueX experiment at Jefferson Lab.
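To illustrate the kind of per-prediction uncertainty a GP provides (this is a minimal sketch using scikit-learn on synthetic data, not the authors' experimental-control code), note how the predictive standard deviation grows for inputs far from the training data, i.e. OOD-like points:

```python
# Minimal GP regression sketch: mean and standard deviation per prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))            # training inputs in [-3, 3]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Query one in-distribution point and one far outside the training range.
X_test = np.array([[0.0], [10.0]])
mean, sigma = gp.predict(X_test, return_std=True)
# sigma[1] (the OOD-like point) is larger than sigma[0]: the GP reverts
# toward its prior away from the data, signaling low confidence there.
```

The key point is that the GP returns a distribution, not just a point estimate, so low-confidence (e.g. OOD) predictions can be flagged automatically.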
As the number of observed data points and/or input features grows, traditional GP implementations do not scale well, and various approximation methods are applied to improve the scaling. To provide accurate uncertainty quantification for DL models, we developed and applied Deep Gaussian Process Approximation (DGPA) methods. We will discuss our work with DGPA for three applications: 1) uncertainty-aware errant beam prediction at the Spallation Neutron Source accelerator, 2) uncertainty-aware particle identification for the Solenoidal Large Intensity Device (SoLID) experiment at the Thomas Jefferson National Accelerator Facility, and 3) an uncertainty-aware surrogate model for the Fermi National Accelerator Laboratory Booster complex.
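One standard way to sidestep the O(n^3) cost of exact GP inference is to approximate the kernel with a finite feature map, after which inference reduces to Bayesian linear regression. The sketch below uses random Fourier features for an RBF kernel; it is a generic illustration of GP approximation, not the DGPA method itself, and all names here are illustrative:

```python
# Sketch: approximate an RBF-kernel GP with D random Fourier features,
# reducing training cost from O(n^3) to roughly O(n * D^2).
import numpy as np

def rff_features(X, D=200, length_scale=1.0, seed=0):
    """Map inputs to D random Fourier features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], D)) / length_scale
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(500)

noise_var = 0.01
Phi = rff_features(X)                                  # (500, D)
# Bayesian linear regression in feature space (unit Gaussian prior on weights):
A = Phi.T @ Phi / noise_var + np.eye(Phi.shape[1])
mean_w = np.linalg.solve(A, Phi.T @ y / noise_var)

# Predictive mean and variance at a new point still come out in closed form.
Phi_star = rff_features(np.array([[0.0]]))
pred_mean = Phi_star @ mean_w
pred_var = noise_var + Phi_star @ np.linalg.solve(A, Phi_star.T)
```

The same idea underlies many scalable GP variants: replace the exact kernel with a cheaper parametric approximation while retaining closed-form predictive uncertainty.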
Consider for long presentation: Yes