Description
The recent developments in ROOT/TMVA focus on fast machine learning inference, enabling analysts to deploy their machine learning models rapidly on large-scale datasets. A new tool, SOFIE, has recently been developed: it generates C++ code for the evaluation of deep learning models trained with external tools such as TensorFlow or PyTorch.
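As a minimal sketch of this workflow (assuming a network exported to a file named `model.onnx`, a hypothetical name used here only for illustration), the SOFIE ONNX parser can be invoked from C++ roughly as follows:

```cpp
// Sketch: parse an ONNX model with SOFIE and emit C++ inference code.
// The file names "model.onnx" and "model.hxx" are hypothetical examples.
#include "TMVA/RModelParser_ONNX.hxx"

void GenerateInferenceCode()
{
   TMVA::Experimental::SOFIE::RModelParser_ONNX parser;
   // Build SOFIE's intermediate representation (RModel) from the ONNX graph
   auto model = parser.Parse("model.onnx");
   // Generate the C++ inference code and write it out as a header file
   model.Generate();
   model.OutputGenerated("model.hxx");
}
```

The generated header typically provides a Session class whose infer method evaluates the model on a buffer of input values, with minimal external dependencies.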
While Python-based deep learning frameworks for training models in GPU environments continue to develop and mature, SOFIE provides a convenient way to integrate the inference of trained models into conventional C++ and CPU-based scientific computing workflows. It thereby makes it easier to integrate machine learning model evaluation into HEP data analysis, in particular when using tools such as RDataFrame.
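As an illustration of such an integration, the following sketch evaluates a SOFIE-generated model inside an RDataFrame Define; the header name `model.hxx`, the generated namespace `TMVA_SOFIE_model`, the tree, file, and column names, and the infer signature are all assumptions for the example.

```cpp
// Sketch: using a SOFIE-generated Session inside an RDataFrame analysis.
// All file, tree and column names below are hypothetical.
#include "ROOT/RDataFrame.hxx"
#include "model.hxx" // header produced by SOFIE (assumed name)
#include <vector>

void ApplyModel()
{
   TMVA_SOFIE_model::Session session; // generated inference session (assumed namespace)
   ROOT::RDataFrame df("Events", "data.root");
   auto hist = df.Define("dnn_score",
                         [&session](float x1, float x2, float x3) {
                            std::vector<float> input{x1, x2, x3};
                            return session.infer(input.data())[0];
                         },
                         {"var1", "var2", "var3"})
                  .Histo1D("dnn_score");
   hist->Print();
}
```

In a multi-threaded RDataFrame run one would use one session per processing slot (e.g. via DefineSlot); the single shared session above is only meant for a single-threaded sketch.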
We will present the recent developments of SOFIE, notably one of its latest features, the support for graph neural networks. At the previous CHEP conference we introduced SOFIE and showed its support for some basic deep learning operators. We have since extended the support for parsing and generating C++ code to cover several deep learning operators commonly used in HEP and represented by the ONNX standard. Another class of architectures typically used in HEP is the graph neural network (GNN). These networks cannot easily and efficiently be represented with ONNX operators. We have therefore developed a set of C++ classes that can represent message-passing GNN architectures created with tools commonly used in HEP, such as PyTorch Geometric and the Graph Nets library from DeepMind. From these classes it is then possible to generate efficient C++ code for GNN inference that can be easily integrated into CPU-based workflows (see the sketch below).
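To make the structure of such message-passing inference concrete, the following is an illustrative sketch (not SOFIE's actual generated output) of a single message-passing step of the kind these GNN architectures perform, with trivial placeholder functions standing in for the learned update networks:

```cpp
// Illustrative sketch of one message-passing step: edges are updated from
// their endpoint nodes, messages are summed per receiver node, and node
// features are updated. The phi_* functions are placeholders for the
// learned MLPs a real model would use.
#include <cstddef>
#include <utility>
#include <vector>

struct Graph {
   std::vector<std::vector<float>> node_feat;   // one feature vector per node
   std::vector<std::vector<float>> edge_feat;   // one feature vector per edge
   std::vector<std::pair<int, int>> edge_index; // (sender, receiver) per edge
};

// Placeholder for the learned edge-update network
static std::vector<float> phi_edge(const std::vector<float> &e,
                                   const std::vector<float> &sender,
                                   const std::vector<float> &receiver)
{
   std::vector<float> out(e.size());
   for (std::size_t j = 0; j < e.size(); ++j)
      out[j] = e[j] + sender[j % sender.size()] + receiver[j % receiver.size()];
   return out;
}

// Placeholder for the learned node-update network
static std::vector<float> phi_node(const std::vector<float> &n, const std::vector<float> &agg)
{
   std::vector<float> out(n.size());
   for (std::size_t j = 0; j < n.size(); ++j)
      out[j] = n[j] + agg[j % agg.size()];
   return out;
}

void MessagePassingStep(Graph &g)
{
   // 1) update every edge from its features and the two endpoint nodes
   for (std::size_t k = 0; k < g.edge_feat.size(); ++k) {
      auto [s, r] = g.edge_index[k];
      g.edge_feat[k] = phi_edge(g.edge_feat[k], g.node_feat[s], g.node_feat[r]);
   }
   // 2) aggregate (sum) incoming edge messages per receiver node
   const std::size_t msgSize = g.edge_feat.empty() ? 0 : g.edge_feat[0].size();
   std::vector<std::vector<float>> agg(g.node_feat.size(), std::vector<float>(msgSize, 0.f));
   for (std::size_t k = 0; k < g.edge_feat.size(); ++k) {
      const int r = g.edge_index[k].second;
      for (std::size_t j = 0; j < g.edge_feat[k].size(); ++j)
         agg[r][j] += g.edge_feat[k][j];
   }
   // 3) update every node from its features and the aggregated messages
   for (std::size_t i = 0; i < g.node_feat.size(); ++i)
      g.node_feat[i] = phi_node(g.node_feat[i], agg[i]);
}
```

In the actual generated inference code the update functions correspond to the trained network weights rather than these placeholders.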
We demonstrate the current capabilities of SOFIE with benchmarks evaluating some machine learning models used by LHC experiments.
Consider for long presentation: Yes