Description
Significant progress has been made in applying graph neural networks (GNNs) and other geometric ML ideas to the track reconstruction problem. State-of-the-art results are obtained with approaches such as the Exa.TrkX pipeline, which currently applies separate edge construction, classification, and segmentation stages. One can also treat the problem as an object condensation task and cluster hits into tracks in a single stage, as in the GravNet architecture; however, condensation with such an architecture may still require non-differentiable operations. In this work, we extend the geometric attention ideas of the GravNetNorm architecture to the task of fully geometric (and therefore fully differentiable) end-to-end track reconstruction in a single step.
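As a rough illustration of the geometric attention idea referred to above, the sketch below (plain PyTorch, not the authors' code) embeds hits into a learned coordinate space, finds neighbours there, and aggregates their features with distance-based weights so that the whole operation remains differentiable. The layer name, dimensions, and Gaussian weighting are illustrative assumptions, not the GravNetNorm implementation.

```python
# Illustrative sketch of a GravNet-style geometric attention layer.
import torch
import torch.nn as nn


class GeometricAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, space_dim: int = 4, feat_dim: int = 16, k: int = 8):
        super().__init__()
        self.coord_net = nn.Linear(in_dim, space_dim)   # learned embedding coordinates
        self.feat_net = nn.Linear(in_dim, feat_dim)     # features to be aggregated
        self.out_net = nn.Linear(in_dim + 2 * feat_dim, in_dim)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        coords = self.coord_net(x)                      # (N, space_dim)
        feats = self.feat_net(x)                        # (N, feat_dim)
        dist = torch.cdist(coords, coords)              # pairwise distances in the learned space
        knn_dist, knn_idx = dist.topk(self.k + 1, largest=False)      # nearest neighbours (incl. self)
        weights = torch.exp(-knn_dist ** 2).unsqueeze(-1)             # Gaussian attention weights
        neigh = feats[knn_idx]                          # (N, k+1, feat_dim)
        agg_mean = (weights * neigh).mean(dim=1)
        agg_max = (weights * neigh).max(dim=1).values
        return self.out_net(torch.cat([x, agg_mean, agg_max], dim=-1))


hits = torch.randn(128, 10)                             # toy point cloud of 128 hits
out = GeometricAttentionLayer(in_dim=10)(hits)          # same shape as the input features
```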
To realize this goal, we introduce a novel condensation loss function called the Influencer Loss, which allows an embedded representation of tracks to be learned in tandem with the most representative hit(s) in each track. This loss has global optima that formally match the task of track reconstruction, namely the smooth condensation of each track to a single point, and we demonstrate this empirically on the TrackML dataset. We combine the Influencer approach with geometric attention to build an Influencer pooling operation that allows a GNN to learn a hierarchy of hits-to-tracks in a fully differentiable fashion. Finally, we show how these ideas naturally lead to a representation of collision point clouds that can be used for downstream predictive and generative tasks.
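To make the condensation idea concrete, here is a minimal, hypothetical sketch of an Influencer-style loss: each hit is attracted to a softly selected "influencer" hit of its own track and repelled from the influencers of other tracks, keeping the selection differentiable. The function name, softmax selection, and hinge repulsion term are assumptions for illustration only; the actual Influencer Loss may take a different form.

```python
# Illustrative condensation-style loss, not the paper's exact formulation.
import torch


def influencer_style_loss(emb: torch.Tensor,
                          score: torch.Tensor,
                          track_id: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """emb: (N, D) hit embeddings, score: (N,) influencer scores,
    track_id: (N,) integer track labels."""
    attract, repel = emb.new_zeros(()), emb.new_zeros(())
    for tid in track_id.unique():
        mask = track_id == tid
        # soft, differentiable selection of the track's most representative hit
        w = torch.softmax(score[mask], dim=0)
        influencer = (w.unsqueeze(-1) * emb[mask]).sum(dim=0)         # (D,)
        # attraction: hits of this track condense onto their influencer
        attract = attract + (emb[mask] - influencer).pow(2).sum(-1).mean()
        # repulsion: hits of other tracks are pushed beyond a margin
        d_other = (emb[~mask] - influencer).pow(2).sum(-1).sqrt()
        repel = repel + torch.relu(margin - d_other).mean()
    return attract + repel


emb = torch.randn(64, 8, requires_grad=True)     # toy embeddings of 64 hits
score = torch.rand(64)                           # toy influencer scores
track_id = torch.randint(0, 5, (64,))            # 5 toy tracks
influencer_style_loss(emb, score, track_id).backward()
```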
Consider for long presentation: Yes