May 8 – 12, 2023
Norfolk Waterside Marriott
US/Eastern timezone

End-to-End Geometric Representation Learning for Track Reconstruction

May 9, 2023, 2:00 PM
15m
Hampton Roads VII (Norfolk Waterside Marriott)

235 East Main Street Norfolk, VA 23510
Oral, Track 9 - Artificial Intelligence and Machine Learning

Speaker

Calafiura, Paolo (LBNL)

Description

Significant progress has been made in applying graph neural networks (GNNs) and other geometric ML ideas to the track reconstruction problem. State-of-the-art results are obtained with approaches such as the Exa.TrkX pipeline, which currently applies separate edge construction, classification, and segmentation stages. One can also treat the problem as an object condensation task and cluster hits into tracks in a single stage, as in the GravNet architecture. However, condensation with such an architecture may still require non-differentiable operations. In this work, we extend the ideas of geometric attention applied in the GravNetNorm architecture to the task of fully geometric (and therefore fully differentiable) end-to-end track reconstruction in one step.
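
For orientation, the sketch below illustrates the GravNet-style aggregation referenced above: each hit is projected into a learned latent space, its nearest neighbours are found there, and their features are aggregated with a distance-dependent weight. The class name, dimensions, and weighting function are illustrative assumptions, not the GravNetNorm implementation used in this work.

```python
import torch
import torch.nn as nn

class GravNetStyleLayer(nn.Module):
    """Illustrative GravNet-style message passing: each hit learns coordinates
    in a latent space, gathers its k nearest neighbours there, and aggregates
    their features with a distance-based weight (dimensions are assumptions)."""

    def __init__(self, in_dim, space_dim=4, feat_dim=22, out_dim=48, k=16):
        super().__init__()
        self.to_space = nn.Linear(in_dim, space_dim)   # learned latent coordinates
        self.to_feat = nn.Linear(in_dim, feat_dim)     # features to be exchanged
        self.out = nn.Linear(in_dim + 2 * feat_dim, out_dim)
        self.k = k

    def forward(self, x):                              # x: (n_hits, in_dim)
        s = self.to_space(x)                           # (n_hits, space_dim)
        f = self.to_feat(x)                            # (n_hits, feat_dim)
        d = torch.cdist(s, s)                          # pairwise latent distances
        dist, idx = d.topk(self.k + 1, largest=False)  # self plus k neighbours
        dist, idx = dist[:, 1:], idx[:, 1:]            # drop self
        w = torch.exp(-10.0 * dist ** 2).unsqueeze(-1) # distance weighting
        neigh = f[idx] * w                             # (n_hits, k, feat_dim)
        agg = torch.cat([neigh.mean(dim=1), neigh.max(dim=1).values], dim=-1)
        return self.out(torch.cat([x, agg], dim=-1))

# toy usage: 100 hits with 6 input features each (hypothetical)
layer = GravNetStyleLayer(in_dim=6)
out = layer(torch.randn(100, 6))                       # (100, 48)
```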

To realize this goal, we introduce a novel condensation loss function, the Influencer Loss, which allows an embedded representation of tracks to be learned in tandem with the most representative hit(s) in each track. This loss has global optima that formally match the task of track reconstruction, namely the smooth condensation of each track to a single point, and we demonstrate this empirically on the TrackML dataset. We combine the Influencer approach with geometric attention to build an Influencer pooling operation that allows a GNN to learn a hits-to-tracks hierarchy in a fully differentiable fashion. Finally, we show how these ideas naturally lead to a representation of collision point clouds that can be used for downstream predictive and generative tasks.
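
To make the condensation idea concrete, the following is a minimal, schematic sketch of an attraction/repulsion loss on hit embeddings, in which each track's hits are pulled toward one representative hit and hits of other tracks are pushed beyond a margin. The function name, the choice of representative, and the margin term are assumptions for illustration; this is a generic condensation-style loss, not the Influencer Loss defined in the paper.

```python
import torch

def condensation_style_loss(emb, track_ids, margin=1.0):
    """Schematic condensation-style loss: for each track, pull its hits toward
    a representative hit and push hits of other tracks beyond a margin.
    Illustrative only; not the published Influencer Loss."""
    loss = emb.new_zeros(())
    for tid in track_ids.unique():
        mask = track_ids == tid
        hits = emb[mask]                                  # hits of this track
        rep = hits[0]                                     # one hit as representative
        attract = ((hits - rep) ** 2).sum(dim=1).mean()   # pull same-track hits in
        others = emb[~mask]
        d = ((others - rep) ** 2).sum(dim=1).sqrt()
        repel = torch.relu(margin - d).pow(2).mean() if len(others) else emb.new_zeros(())
        loss = loss + attract + repel
    return loss / track_ids.unique().numel()

# toy usage: embeddings for 100 hits in 8 dimensions, 10 truth tracks
emb = torch.randn(100, 8, requires_grad=True)
track_ids = torch.randint(0, 10, (100,))
condensation_style_loss(emb, track_ids).backward()        # end-to-end differentiable
```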

Consider for long presentation: Yes

Primary authors

Murnane, Daniel (Lawrence Berkeley National Laboratory)
Calafiura, Paolo (LBNL)
Ju, Xiangyang (Lawrence Berkeley National Laboratory)
Pham, Tuan Minh (University of Wisconsin-Madison)
Liu, Ryan (UC Berkeley)
Farrell, Steven (LBNL)
