Conveners
Track 3 - Offline Computing: Reconstruction
- Marilena Bandieramonte (University of Pittsburgh)
- Norraphat Srimanobhas (Chulalongkorn University)
Track 3 - Offline Computing: Physics performance (part 1)
- Norraphat Srimanobhas (Chulalongkorn University)
- Tingjun Yang (Fermilab)
Track 3 - Offline Computing: Physics performance (part 2)
- Norraphat Srimanobhas (Chulalongkorn University)
- Tingjun Yang (Fermilab)
Track 3 - Offline Computing: Data preparation (part 1)
- Marilena Bandieramonte (University of Pittsburgh)
- Norraphat Srimanobhas (Chulalongkorn University)
Track 3 - Offline Computing: Simulation (part 2)
- Marilena Bandieramonte (University of Pittsburgh)
- Tingjun Yang (Fermilab)
Track 3 - Offline Computing: Simulation (part 3)
- Marilena Bandieramonte (University of Pittsburgh)
- Sofia Vallecorsa (CERN)
Track 3 - Offline Computing: Data preparation (part 2)
- Xavier Espinal (CERN)
- Tingjun Yang (Fermilab)
The reconstruction of particle trajectories is a key challenge for particle physics experiments, as it directly impacts particle identification and physics performance while also representing one of the main CPU consumers in many high energy physics experiments. As the luminosity of particle colliders increases, this reconstruction will become more challenging and resource intensive. New...
MkFit is an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. In the past few years the project transitioned from the R&D phase to deployment in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations, targeting reconstruction of tracks of increasing difficulty after...
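For reference, the measurement update performed at each detector layer in a Kalman-filter track fit takes the standard form (generic notation, not specific to the MkFit implementation):

$$K_k = C_{k|k-1} H_k^{T}\,\bigl(V_k + H_k\,C_{k|k-1} H_k^{T}\bigr)^{-1},\qquad x_{k|k} = x_{k|k-1} + K_k\,\bigl(m_k - H_k\,x_{k|k-1}\bigr),\qquad C_{k|k} = \bigl(I - K_k H_k\bigr)\,C_{k|k-1},$$

where $x$ is the track state, $C$ its covariance, $m_k$ the hit measurement with covariance $V_k$, $H_k$ the projection from state to measurement space, and $K_k$ the Kalman gain. The data-level parallelism mentioned above comes in part from evaluating these small matrix operations for many track candidates at once using vectorized data layouts.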
Despite recent advances in optimising the track reconstruction problem for high particle multiplicities in high energy physics experiments, it remains one of the most demanding reconstruction steps in terms of complexity and computing resources. Several attempts have been made in the past to deploy suitable algorithms for track reconstruction on hardware accelerators, often by tailoring the...
The high luminosity expected from the LHC during Run 3 and, especially, during HL-LHC data taking introduces significant challenges in the CMS event reconstruction chain. The additional computational resources needed to process this increased quantity of data surpass the expected increase in processing power for the next years. In order to fit the projected resource envelope, CMS is...
Building on the pioneering work of the HEP.TrkX project [1], Exa.TrkX developed geometric learning tracking pipelines that include metric learning and graph networks. These end-to-end pipelines capture the relationships between spacepoint measurements belonging to a particle track. We tested the pipelines on simulated data from HL-LHC tracking detectors [2,5], Liquid Argon TPCs for neutrino...
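As a toy illustration of the graph-construction step that precedes the GNN (the actual pipelines use learned embeddings and fast neighbor search rather than this brute-force loop, and the cut value below is arbitrary):

```cpp
// Spacepoints become graph nodes; candidate edges connect pairs of
// spacepoints that are close in some (possibly learned) embedded space.
#include <cmath>
#include <utility>
#include <vector>

struct SpacePoint { float x, y, z; };

std::vector<std::pair<int, int>> buildEdges(const std::vector<SpacePoint>& sp,
                                            float maxDist)
{
  std::vector<std::pair<int, int>> edges;
  for (int i = 0; i < static_cast<int>(sp.size()); ++i)
    for (int j = i + 1; j < static_cast<int>(sp.size()); ++j) {
      const float dx = sp[i].x - sp[j].x;
      const float dy = sp[i].y - sp[j].y;
      const float dz = sp[i].z - sp[j].z;
      if (std::sqrt(dx * dx + dy * dy + dz * dz) < maxDist)
        edges.emplace_back(i, j);  // candidate edge to be classified by the GNN
    }
  return edges;
}
```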
The production of simulated datasets for use by physics analyses consumes a large fraction of ATLAS computing resources, a problem that will only get worse as increases in the instantaneous luminosity provided by the LHC lead to more collisions per bunch crossing (pile-up). One of the more resource-intensive steps in the Monte Carlo production is reconstructing the tracks in the ATLAS Inner...
The AGATA project (1) aims at building a 4pi gamma-ray spectrometer consisting of 180 germanium crystals, each crystal being divided into 36 segments. Each gamma ray produces an electrical signal within several neighbouring segments, which is compared with a database of reference signals, making it possible to locate the interaction. This step is called Pulse-Shape Analysis (PSA).
In the execution...
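Schematically, a grid-search PSA picks the interaction position $\mathbf{r}$ whose reference signals best match the measured traces, for example by minimizing

$$\chi^2(\mathbf{r}) = \sum_{s}\sum_{t}\left[S^{\mathrm{meas}}_{s}(t) - S^{\mathrm{ref}}_{s}(\mathbf{r}, t)\right]^{2},$$

where $s$ runs over the hit segment and its neighbours and $t$ over the digitized time samples; this notation is generic rather than AGATA-specific.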
Track reconstruction, also known as tracking, is a vital part of the HEP event reconstruction process, and one of the largest consumers of computing resources. The upcoming HL-LHC upgrade will exacerbate the need for efficient software able to make good use of the underlying heterogeneous hardware. However, this evolution should not imply the production of code unintelligible to most of its...
The LHCb software stack is developed in C++ and uses the Gaudi framework for event processing and DD4hep for the detector description. Numerical computations are done either directly in the C++ code or by an evaluator used to process the expressions embedded in the XML describing the detector geometry.
The current system relies on conventions for the physical units used (identical to what...
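As an illustration of such a unit convention (in the spirit of the CLHEP/Gaudi system of units, where quantities are stored as plain doubles in base units such as millimetres and MeV; the names and values below are a sketch, not the actual LHCb definitions):

```cpp
// Sketch of a units convention: base units are 1.0, derived units are scale
// factors, and every stored double is implicitly expressed in base units.
namespace Units {
  constexpr double mm  = 1.0;            // base length unit
  constexpr double cm  = 10.0 * mm;
  constexpr double m   = 1000.0 * mm;
  constexpr double MeV = 1.0;            // base energy unit
  constexpr double GeV = 1000.0 * MeV;
}

// The convention, not the type system, guarantees consistency: any code (or
// XML expression evaluator) reading these values must assume the same base units.
constexpr double beamPipeRadius = 2.5 * Units::cm;   // stored as 25.0 (mm)
constexpr double trackMomentum  = 5.0 * Units::GeV;  // stored as 5000.0 (MeV)
```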
Applying graph-based techniques, and graph neural networks (GNNs) in particular, has been shown to be a promising solution to the high-occupancy track reconstruction problems posed by the upcoming HL-LHC era. Simulations of this environment present noisy, heterogeneous and ambiguous data, which previous GNN-based algorithms for ATLAS ITk track reconstruction could not handle natively. We...
The Belle II experiment has been accumulating data since 2019 at the SuperKEKB $e^+e^-$ accelerator in Tsukuba, Japan. The accelerator operates at the $\Upsilon(4S)$ resonance and is an excellent laboratory for precision flavor measurements and dark sector searches. The accumulated data are promptly reconstructed and calibrated at a dedicated calibration center in an automated process based on...
Development of the EIC project detector "ePIC" is now well underway and this includes the "single software stack" used for simulation and reconstruction. The stack combines several non-experiment-specific packages including ACTS, DD4hep, JANA2, and PODIO. The software stack aims to be forward looking in the era of AI/ML and heterogeneous hardware. A formal decision making process was...
The EPIC collaboration at the Electron-Ion Collider recently laid the groundwork for its software infrastructure. Large parts of the software ecosystem for EPIC mirror the setup from the Key4hep project, for example DD4hep for geometry description, and EDM4hep/PODIO for the data model. However, other parts of the EPIC software ecosystem diverge from Key4hep, for example for the event...
The reconstruction of charged particles’ trajectories is one of the most complex and CPU-consuming parts of the event processing chain in high energy physics (HEP) experiments. Meanwhile, the precision of track reconstruction has a direct and significant impact on vertex reconstruction, physics flavour tagging and particle identification, and eventually on physics precision, in particular for HEP experiments...
ACTS is an experiment independent toolkit for track reconstruction, which is designed from the ground up for thread-safety and high performance. It is built to accommodate different experiment deployment scenarios, and also serves as community platform for research and development of new approaches and algorithms.
The Event Data Model (EDM) is a critical piece of the tracking library that...
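As a purely illustrative sketch (not the actual ACTS track container interface), a column-oriented track EDM could be organized as one vector per property, with the track index selecting a row:

```cpp
// Illustrative structure-of-arrays track EDM: each property is a separate
// column, and a track is identified by its index into all columns.
#include <array>
#include <cstddef>
#include <vector>

struct TrackContainerSoA {
  // Bound track parameters (e.g. d0, z0, phi, theta, q/p, t) and their
  // packed symmetric 6x6 covariance, one entry per track.
  std::vector<std::array<double, 6>>  parameters;
  std::vector<std::array<double, 21>> covariance;
  std::vector<unsigned int>           nMeasurements;

  std::size_t addTrack() {
    parameters.emplace_back();
    covariance.emplace_back();
    nMeasurements.push_back(0);
    return parameters.size() - 1;
  }
  std::size_t size() const { return parameters.size(); }
};
```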
For Run 3, ATLAS redesigned its offline software, Athena, so that the
main workflows run completely multithreaded. The resulting substantial
reduction in the overall memory requirements allows for better use
of machines with many cores. This talk will discuss the performance
achieved by the multithreaded reconstruction as well as the process
of migrating the large ATLAS code base and...
During the long shutdown between LHC Run 2 and 3, a reprocessing of 2017 and 2018 CMS data with higher granularity data quality monitoring (DQM) harvesting was done. The time granularity of DQM histograms in this dataset is increased by 3 orders of magnitude. In anticipation of deploying this higher granularity DQM harvesting in the ongoing Run 3 data taking, this dataset is used to study the...
The CMS Tier-0 service is responsible for the prompt processing and distribution of the data collected by the CMS Experiment. A number of upgrades were implemented during the long shutdown of the Large Hadron Collider, which improved the performance and reliability of the service. We report our experience of the data taking during Run-3 detector commissioning as well as performance of the...
(on behalf of the JUNO Collaboration)
Jiangmen Underground Neutrino Observatory (JUNO), under construction in southern China, is a multi-purpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. Equipped with a 20-kton liquid scintillator central detector viewed by 17,612 20-inch and 25,600 3-inch photomultiplier tubes, JUNO...
The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 30k job slots for offline computing. The role of this farm has been evolving, from an opportunistic resource exploited during inter-fill periods in the LHC Run 2, to a nearly transparent extension of the CMS capacity at CERN during LS2 and into the LHC Run 3 started...
The Super Tau Charm Facility (STCF) proposed in China is a new-generation electron–positron collider with center-of-mass energies covering 2–7 GeV and a peak luminosity of $5\times10^{34}$ cm$^{-2}$s$^{-1}$. The offline software of STCF (OSCAR) is developed to support the offline data processing, including detector simulation, reconstruction, calibration as well as physics analysis. To meet STCF’s specific...
We summarize the status of Deep Underground Neutrino Experiment (DUNE) software and computing development. We describe plans for the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes in pursuit of the experiment's physics goals of precision measurements of neutrino oscillation parameters, detection of...
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline experiment which aims to study neutrino oscillation and astroparticle physics. It will produce vast amounts of metadata, which describe the data coming from the read-out of the primary DUNE detectors. Various databases will make up the overall DB architecture for this metadata. ProtoDUNE at CERN is the largest existing...
EvtGen is a simulation generator specialized for decays of heavy hadrons. Since its early development in the 1990s, the generator has been extensively used and has today become an essential tool for heavy-flavour physics analyses. Throughout this time, its source code has remained mostly unchanged, except for additions of new decay models. In view of the upcoming boom of multi-threaded...
The upgrade of the Large Hadron Collider (LHC) is progressing well; during the next decade we will face a ten-fold increase in experimental data. The application of state-of-the-art detectors and data acquisition systems requires high-performance simulation support, which is even more demanding in the case of heavy ion collisions. Our basic aim was to develop a Monte Carlo simulation code for heavy ion...
In a context where the HEP community is striving to improve the software to cope with higher data throughput, detector simulation is adapting to benefit from new performance opportunities. Given the complexity of the particle transport modeling, new developments such as adapting to accelerator hardware represent a scalable R&D effort.
The AdePT and Celeritas projects have already...
Motivated by the need to have large Monte Carlo data statistics to be able to perform the physics analysis for the coming runs of HEP experiments, particularly for HL-LHC, there are a number of efforts exploring different avenues for speeding up particle transport simulation. In particular, one of the possibilities is to re-implement the simulation code to run efficiently on GPUs. This could...
Monte Carlo detector transport codes are one of the backbones in high-energy physics. They simulate the transport of a large variety of different particle types through complex detector geometries based on a multitude of physics models.
Those simulations are usually configured or tuned through large sets of parameters. Often, tuning the physics accuracy on the one hand and optimising the...
Geant4, the leading detector simulation toolkit used in High Energy Physics, employs a set of physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and inclusive characteristics, and use physically motivated parameters. However, they generally aim...
The analysis category was introduced in Geant4 almost ten years ago (in 2014) with the aim to provide users with a lightweight analysis tool, available as part of the Geant4 installation without the need to link to an external analysis package. It helps capture statistical data in the form of histograms and n-tuples and store these in files in four different formats. It was already presented at...
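A minimal usage sketch, assuming a recent Geant4 11.x release where the unified G4AnalysisManager header is available; the histogram names and values are purely illustrative:

```cpp
// Book a histogram and an n-tuple with the Geant4 analysis category,
// fill them, and write them out in the chosen file format.
#include "G4AnalysisManager.hh"
#include "G4SystemOfUnits.hh"

void BookAndFill()
{
  auto* analysisManager = G4AnalysisManager::Instance();
  analysisManager->SetDefaultFileType("root");   // also: csv, hdf5, xml

  // Booking: one 1D histogram and one n-tuple with a single column.
  G4int hId = analysisManager->CreateH1("Edep", "Energy deposit", 100, 0., 10. * MeV);
  analysisManager->CreateNtuple("hits", "Hit data");
  analysisManager->CreateNtupleDColumn("edep");
  analysisManager->FinishNtuple();

  analysisManager->OpenFile("run0");

  // Filling (normally done per event / per hit).
  analysisManager->FillH1(hId, 1.2 * MeV);
  analysisManager->FillNtupleDColumn(0, 1.2);
  analysisManager->AddNtupleRow();

  analysisManager->Write();
  analysisManager->CloseFile();
}
```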
For the new Geant4 11.X series, the electromagnetic (EM) physics sub-libraries were revised and reorganized in view of the requirements for simulation of the Phase-2 LHC experiments. EM physics simulation accounts for a significant fraction of the CPU time during massive production of Monte Carlo events for the LHC experiments. We present the recent evolution of the Geant4 EM sub-libraries for the simulation of gamma, electron, and...
The Circular Electron Positron Collider (CEPC) [1] is one of the future experiments aiming to study the Higgs boson’s properties precisely. For this purpose, excellent track reconstruction and particle identification (PID) performance are required: the tracking efficiency should be close to 100%, the momentum resolution should be better than 0.1%, and kaons and pions should have a 2 sigma...
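For context, one commonly used definition of the K/π separation power referred to in such requirements is

$$S_{K/\pi} = \frac{\left|\mu_K - \mu_\pi\right|}{\left(\sigma_K + \sigma_\pi\right)/2},$$

where $\mu$ and $\sigma$ are the means and resolutions of the PID observable (e.g. dE/dx or time of flight) for kaons and pions at a given momentum.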
MoEDAL (the Monopole and Exotics Detector at the LHC) searches directly for magnetic monopoles at Interaction Point 8 of the Large Hadron Collider (LHC). As an upgrade of the experiment, the additional MAPP (MoEDAL Apparatus for Penetrating Particles) detector extends the physics reach by providing sensitivity to milli-charged and long-lived exotic particles. The MAPP detectors are scintillator...
FullSimLight is a lightweight, Geant4-based command line
simulation utility intended for studies of simulation performance. It
is part of the GeoModel toolkit (geomodel.web.cern.ch) which has been
stable for more than one year. The FullSimLight component
has recently undergone renewed development aimed at extending its
functionality. It has been endowed with a GUI for fast,...
In this contribution we report the status of the CMS Geant4 simulation and the prospects for Run-3 and Phase-2.
Firstly, we report on our experience during the start of Run-3 with Geant4 10.7.2, the common software package DD4hep for geometry description, and the VecGeom run-time geometry library. In addition, the FTFP_BERT_EMM Physics List and the CMS configuration for tracking in the magnetic field have...
For HEP event processing, data is typically stored in column-wise synchronized containers, most prominently ROOT’s TTree, which has been used for several decades and by now stores over 1 exabyte. These containers can combine row-wise association capabilities needed by most HEP event processing frameworks (e.g. Athena for ATLAS) with column-wise storage, which typically results in better...
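For illustration, this is roughly how a column-wise TTree is filled with ROOT; the file, tree, and branch names here are arbitrary:

```cpp
// Each Branch becomes an independently stored (and compressed) column,
// while each Fill() call appends one row, i.e. one event.
#include "TFile.h"
#include "TTree.h"

void write_events()
{
  TFile file("events.root", "RECREATE");
  TTree tree("Events", "Column-wise event data");

  float pt = 0.f;
  int nTracks = 0;
  tree.Branch("pt", &pt, "pt/F");                 // column of floats
  tree.Branch("nTracks", &nTracks, "nTracks/I");  // column of ints

  for (int i = 0; i < 1000; ++i) {                // one row per event
    pt = 0.1f * i;
    nTracks = i % 20;
    tree.Fill();
  }
  tree.Write();
  file.Close();
}
```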
The increased footprint foreseen for Run-3 and HL-LHC data will soon expose
the limits of currently available storage and CPU resources. Data formats
are already optimized according to the processing chain for which they are
designed. ATLAS events are stored in ROOT-based reconstruction output files
called Analysis Object Data (AOD), which are then processed within the
derivation...
With the increased data volumes expected to be delivered by the HL-LHC, it becomes critical for the ATLAS experiment to maximize the utilization of available computing resources ranging from conventional GRID clusters to supercomputers and cloud computing platforms. To be able to run its data processing applications on these resources, the ATLAS software framework must be capable of...
Since March 2019 the Belle II detector has collected data from e+ e- collisions at the SuperKEKB collider. For Belle II analyses to be competitive it is crucial that calibration constants are calculated promptly so that the reconstructed datasets can be provided to analysts. A subset of calibration constants also benefits from being re-derived during yearly recalibration campaigns to give...
REve, the new generation of the ROOT event-display module, uses a web server-client model to guarantee exact data translation from the experiments' data analysis frameworks to users' browsers. Data is then displayed in various views, including high-precision 2D and 3D graphics views, currently driven by the THREE.js rendering engine based on WebGL technology.
RenderCore, a computer graphics...
Particle tracking is among the most sophisticated and complex parts of the full event reconstruction chain. A number of reconstruction algorithms work in a sequence to build these trajectories from detector hits. Each of these algorithms uses many configuration parameters that need to be fine-tuned to properly account for the detector/experimental setup, the available CPU budget and the desired...
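As a schematic sketch of such parameter tuning (the parameter names, ranges, and the toy figure of merit below are purely illustrative stand-ins for running the reconstruction and measuring efficiency and fake rate):

```cpp
// Random-search tuner over a small set of reconstruction configuration
// parameters; evaluateTracking() is a hypothetical placeholder for running
// the tracking chain and computing a figure of merit (higher is better).
#include <cstdio>
#include <limits>
#include <random>

struct Config { double maxChi2; double minPt; int maxSharedHits; };

// Toy figure of merit; in practice this would run the reconstruction.
double evaluateTracking(const Config& c)
{
  return -(c.maxChi2 - 15.0) * (c.maxChi2 - 15.0) - 10.0 * c.minPt - c.maxSharedHits;
}

Config randomSearch(int trials)
{
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> chi2(5.0, 30.0), pt(0.1, 1.0);
  std::uniform_int_distribution<int> shared(0, 3);

  Config best{};
  double bestScore = -std::numeric_limits<double>::infinity();
  for (int i = 0; i < trials; ++i) {
    Config c{chi2(rng), pt(rng), shared(rng)};
    double score = evaluateTracking(c);
    if (score > bestScore) { bestScore = score; best = c; }
  }
  std::printf("best: maxChi2=%.1f minPt=%.2f maxSharedHits=%d\n",
              best.maxChi2, best.minPt, best.maxSharedHits);
  return best;
}
```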