The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models, i.e., algorithms that have been trained on data rather than explicitly programmed, are an important target for such generalization because AI is transforming scientific and engineering domains at an ever-increasing pace.
We propose a practical definition of the FAIR principles for AI models, create a FAIR AI project template that promotes adherence to these principles, and introduce a framework to quantify whether an AI model is FAIR. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and we report new insights into the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. These studies, enabled by the publication of FAIR AI models, pave the way toward reliable and automated AI-driven scientific discovery.
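To make the idea of quantifying FAIR adherence concrete, the sketch below scores a model against a checklist of per-principle criteria. This is a hypothetical illustration, not the paper's actual framework: the criterion names, their grouping, and the equal weighting are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of a FAIR-ness scoring checklist for AI models.
# Criterion names and equal weighting are illustrative assumptions,
# not the framework defined in the paper.

FAIR_CRITERIA = {
    "findable": [
        "has_persistent_identifier",  # e.g., a DOI minted for the model release
        "has_rich_metadata",          # architecture, training data, version
    ],
    "accessible": [
        "openly_downloadable",        # weights retrievable via standard protocols
        "metadata_persists",          # metadata survives even if weights are removed
    ],
    "interoperable": [
        "uses_open_format",           # e.g., an open exchange format such as ONNX
        "documents_dependencies",     # pinned, reproducible software environment
    ],
    "reusable": [
        "has_license",                # clear license governing reuse
        "has_provenance",             # links to training data and training code
    ],
}


def fair_score(model_properties: dict) -> float:
    """Return the fraction of checklist criteria a model satisfies (0.0-1.0)."""
    checks = [name for group in FAIR_CRITERIA.values() for name in group]
    satisfied = sum(bool(model_properties.get(name, False)) for name in checks)
    return satisfied / len(checks)
```

For example, a model that only has a persistent identifier, an open format, and a license satisfies 3 of the 8 criteria, so `fair_score` returns `0.375`. A real framework would likely weight criteria and distinguish partial compliance, but a simple fraction already makes adherence comparable across models.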