Conveners
Track 2 - Online Computing: DAQ Frameworks
- Claire Antel (Universite de Geneve (CH))
- Felice Pantaleo (CERN)
Track 2 - Online Computing: Online Reconstruction
- Ruben Shahoyan (CERN)
- Felice Pantaleo (CERN)
Track 2 - Online Computing: DQ, Monitoring & Control
- Claire Antel (Universite de Geneve (CH))
- Ruben Shahoyan (CERN)
Track 2 - Online Computing: Accelerated Computing & Artificial Intelligence
- Satoru Yamada (KEK)
- Felice Pantaleo (CERN)
Track 2 - Online Computing: Network & Infrastructure
- Claire Antel (Universite de Geneve (CH))
- Satoru Yamada (KEK)
Track 2 - Online Computing: Network & Infrastructure (Part 2)
- Claire Antel (Universite de Geneve (CH))
- Ruben Shahoyan (CERN)
Track 2 - Online Computing: Readout & Data Processing
- Ruben Shahoyan (CERN)
- Satoru Yamada (KEK)
The LHCb experiment is one of the four large experiments at the LHC at CERN. This forward spectrometer is designed to investigate differences between matter and antimatter by studying beauty and charm physics. The detector and the entire DAQ chain have been upgraded to profit from the higher luminosity delivered by the accelerator during Run 3. The new DAQ system introduces a...
ALICE (A Large Ion Collider Experiment) has undertaken a major upgrade during the LHC Long Shutdown 2. The increase in the detector data rates led to a hundredfold increase in the input raw data, up to 3.5 TB/s. To cope with it, a new common Online and Offline computing system, called O2, has been developed and put in production.
The O2/FLP system, successor of the ALICE DAQ system,...
Athena is the software framework used in the ATLAS experiment throughout the data processing path, from the software trigger system through offline event reconstruction to physics analysis. For Run 3 data taking (which started in 2022), Athena has been reimplemented as a multi-threaded framework. In addition to having to be remodelled to work in this new framework, the ATLAS High Level...
The INDRA-ASTRA project is part of the ongoing R&D on streaming readout and AI/ML at Jefferson Lab. In this interdisciplinary project, nuclear physicists and data scientists work towards a prototype of an autonomous, responsive detector system as a first step towards a fully autonomous experiment. We will present our method for autonomous calibration of DIS experiments...
The ATLAS experiment at CERN is constructing an upgraded system for the High-Luminosity LHC, with collisions due to start in 2029. In order to deliver an order of magnitude more data than in previous LHC runs, 14 TeV protons will collide at an instantaneous luminosity of up to 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to...
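As a rough cross-check of these numbers (the inelastic cross section and bunch pattern used below are assumed textbook values, not figures taken from this abstract), the mean pile-up follows from dividing the interaction rate by the bunch-crossing rate:
\[
\mu \;\approx\; \frac{\mathcal{L}\,\sigma_{\text{inel}}}{n_b\,f_{\text{rev}}}
\;\approx\; \frac{(7.5\times 10^{34}\,\text{cm}^{-2}\,\text{s}^{-1})\,(8\times 10^{-26}\,\text{cm}^{2})}{2808 \times 11245\,\text{Hz}}
\;\approx\; 190,
\]
which is consistent with the roughly 200 simultaneous interactions per crossing quoted elsewhere in this track for the ultimate HL-LHC scenario.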
The fast algorithms for data reconstruction and analysis from the FLES (First Level Event Selection) package of the CBM (FAIR/GSI) experiment were successfully adapted to run online on the High Level Trigger (HLT) of the STAR (BNL) experiment. For this purpose, a so-called express data stream was created on the HLT, which enabled full processing and analysis of the experimental data in real...
The CMS collaboration has chosen a novel high granularity calorimeter (HGCAL) for the endcap regions as part of its planned upgrade for the high luminosity LHC. The calorimeter will have fine segmentation in both the transverse and longitudinal directions and will be the first such calorimeter specifically optimised for particle flow reconstruction to operate at a colliding-beam experiment....
Fast, efficient and accurate triggers are a critical requirement for modern high energy physics experiments given the increasingly large quantities of data that they produce. The CEBAF Large Acceptance Spectrometer (CLAS12) employs a highly efficient Level 3 electron trigger to filter the amount of data recorded by requiring at least one electron in each event, at the cost of a low purity in...
Long-lived particles (LLPs) are very challenging to search for with current detectors and within current computing constraints, due to their highly displaced vertices. This study evaluates the ability of the trigger algorithms used in the Large Hadron Collider beauty (LHCb) experiment to detect long-lived particles and attempts to adapt them to enhance the sensitivity of this experiment to undiscovered...
The Phase-2 Upgrade of the CMS Level-1 Trigger will reconstruct particles using the Particle Flow algorithm, connecting information from the tracker, muon, and calorimeter detectors, and enabling fine-grained reconstruction of high-level physics objects such as jets. We have developed a jet reconstruction algorithm using a cone centred on an energetic seed from these Particle Flow candidates. The...
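A minimal sketch of the seeded-cone idea described above, in plain Python over a list of (pT, eta, phi) candidates; the cone size, seed threshold, and greedy clustering order are illustrative assumptions, not the parameters of the actual Level-1 firmware implementation.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two candidates, with phi wrap-around."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def seeded_cone_jets(candidates, cone_size=0.4, seed_pt_min=5.0):
    """Greedy seeded-cone clustering: take the highest-pT unused candidate
    as the seed and sum every candidate within the cone around it.
    `candidates` is a list of (pt, eta, phi) tuples (illustrative format)."""
    jets = []
    remaining = sorted(candidates, key=lambda c: c[0], reverse=True)
    while remaining and remaining[0][0] >= seed_pt_min:
        seed = remaining[0]
        in_cone = [c for c in remaining
                   if delta_r(seed[1], seed[2], c[1], c[2]) < cone_size]
        jet_pt = sum(c[0] for c in in_cone)
        jets.append((jet_pt, seed[1], seed[2]))  # jet carries the seed direction
        remaining = [c for c in remaining if c not in in_cone]
    return jets
```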
The CMS experiment has greatly benefited from the utilization of the particle-flow (PF) algorithm for the offline reconstruction of the data. The Phase II upgrade of the CMS detector for the High Luminosity upgrade of the LHC (HL-LHC) includes the introduction of tracking in the Level-1 trigger, thus offering the possibility of developing a simplified PF algorithm in the Level-1 trigger. We...
The current and future programs for accelerator-based neutrino imaging detectors feature the use of Liquid Argon Time Projection Chambers (LArTPC) as the fundamental detection technology. These detectors combine high-resolution imaging and precision calorimetry to enable the study of neutrino interactions with unparalleled capabilities. However, the volume of data from LArTPCs will exceed 25...
The ATLAS jet trigger is instrumental in selecting events both for Standard Model measurements and Beyond the Standard Model physics searches. Non-standard triggering strategies, such as saving only a small fraction of trigger objects for each event, avoid bandwidth limitations and increase sensitivity to low-mass and low-momentum objects. These events are used by Trigger Level Analyses,...
The LHCb experiment started taking data with an upgraded detector in Run 3 of the LHC, reading out the full detector data at 30 MHz with a software-only trigger system. In this context, live data monitoring is crucial to ensure that the quality of the recorded data is optimal. Data from the experiment control system, as well as raw detector data and the output of the software trigger, is used as...
The CMS Online Monitoring System (OMS) aggregates and integrates different sources of information into a central place and allows users to view, compare and correlate information. It displays real-time and historical information.
The tool is heavily used by run coordinators, trigger experts and shift crews to achieve optimal trigger performance and efficient data taking. It provides aggregated...
Hydra is a system which utilizes computer vision to monitor data quality in near real time. Currently, it is deployed in all of Jefferson Lab’s experimental halls and lightens the load on shift takers by autonomously monitoring diagnostic plots in near real time. Hydra is constructed from “off-the-shelf” technologies and is backed by a MySQL database. To aid with both labeling and...
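A minimal sketch of the general pattern such a system relies on (classify an image of a monitoring plot and report a verdict), assuming a generic Keras image classifier; the model file, label set, and input size are hypothetical and do not reflect Hydra's actual implementation.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

LABELS = ["good", "bad", "no_data"]  # illustrative label set, not Hydra's

def classify_plot(model, image_path):
    """Run a trained image classifier over one monitoring plot."""
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    scores = model.predict(batch, verbose=0)[0]
    return LABELS[int(np.argmax(scores))], float(np.max(scores))

if __name__ == "__main__":
    # Hypothetical model file; in a real deployment the verdict would be
    # written to the monitoring database rather than printed.
    model = tf.keras.models.load_model("plot_classifier.keras")
    label, confidence = classify_plot(model, "occupancy_plot.png")
    print(f"{label} ({confidence:.2f})")
```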
ALICE (A Large Ion Collider Experiment) has undertaken a major upgrade during the Long Shutdown 2. The increase in the detector data rates, and in particular the continuous readout of the TPC, led to a hundredfold increase in the input raw data, up to 3.5 TB/s. To cope with it, a new common Online and Offline computing system, called O2, has been developed and put in production.
The online...
One critical step on the path from data taking to physics analysis is calibration. For many experiments this step is both time consuming and computationally expensive. The AI Experimental Calibration and Control project seeks to address these issues, starting first with the GlueX Central Drift Chamber (CDC). We demonstrate the ability of a Gaussian Process to estimate the gain correction...
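A minimal sketch of Gaussian-Process regression for a gain-correction factor, assuming environmental quantities such as pressure and temperature as inputs; the features, kernel, and synthetic training data are illustrative and not the GlueX CDC configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative training set: environmental conditions -> measured gain correction.
# In a real system these would come from calibration runs, not random numbers.
rng = np.random.default_rng(0)
X_train = rng.uniform([98.0, 20.0], [103.0, 30.0], size=(50, 2))  # [pressure kPa, temperature C]
y_train = 1.0 + 0.02 * (X_train[:, 0] - 100.0) - 0.01 * (X_train[:, 1] - 25.0)

kernel = RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict the correction (with its uncertainty) for the current conditions.
X_now = np.array([[101.3, 24.0]])
mean, std = gp.predict(X_now, return_std=True)
print(f"gain correction = {mean[0]:.4f} +/- {std[0]:.4f}")
```

The predictive uncertainty is what makes a Gaussian Process attractive here: the estimate can be applied automatically when the model is confident and flagged for expert review when it is not.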
The Large Hadron Collider (LHC) will be upgraded to the High-Luminosity LHC, increasing the number of simultaneous proton-proton collisions (pile-up, PU) severalfold. The harsher PU conditions lead to exponentially increasing combinatorics in charged-particle tracking, placing a large demand on computing resources. The projected computing resource requirements exceed the computing...
The LHCb experiment is currently taking data with a completely renewed DAQ system, capable for the first time of performing a full real-time reconstruction of all collision events occurring at LHC point 8.
The Collaboration is now pursuing a further upgrade (LHCb "Upgrade-II"), to enable the experiment to retain the same capability at luminosities an order of magnitude larger than the maximum...
The High-Luminosity LHC (HL-LHC) will provide an order of magnitude increase in integrated luminosity and enhance the discovery reach for new phenomena. The increased pile-up foreseen during the HL-LHC necessitates major upgrades to the ATLAS detector and trigger. The Phase-II trigger will consist of two levels, a hardware-based Level-0 trigger and an Event Filter (EF) with tracking...
During LHC Run 3, significant upgrades of many detectors and brand-new reconstruction software allow the ALICE experiment to record Pb-Pb collisions at an interaction rate of 50 kHz in a trigger-less continuous readout mode.
The key to processing the 1 TB/s peak data rate in ALICE is the use of GPUs. There are two main data processing phases: the synchronous phase, where the TPC...
The LHCb experiment has recently started a new period of data taking after a major upgrade in both software and hardware. One of the biggest challenges has been the migration of the first part of the trigger system (HLT1) into a parallel GPU architecture framework called Allen, which performs a partial reconstruction of most of the LHCb sub-detectors. In Allen, the reconstruction of the...
The sensitivity of modern HEP experiments to New Physics (NP) is limited by the hardware-level triggers used to select data online, resulting in a bias in the data collected. The deployment of efficient data acquisition systems integrated with online processing pipelines is instrumental to increase the experiments' sensitivity to the discovery of any anomaly or possible signal of NP. In...
The CMS experiment at CERN incorporates one of the highest-throughput Data Acquisition (DAQ) systems in High-Energy Physics. Its network throughput will further increase by over an order of magnitude at the High-Luminosity LHC.
The current Run 3 CMS Event Builder receives all the fragments of Level-1 trigger accepted events from the front-end electronics (740 data streams from the CMS...
The ATLAS experiment Data Acquisition (DAQ) system will be extensively upgraded to fully exploit the High-Luminosity LHC (HL-LHC) upgrade, allowing it to record data at unprecedented rates. The detector will be read out at 1 MHz generating over 5 TB/s of data. This design poses significant challenges for the Ethernet-based network as it will be required to transport 20 times more data than...
To achieve better computational efficiency and exploit a wider range of computing resources, the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs, while the support for AMD and Intel GPUs is under development. To avoid the need to write, validate and maintain a separate implementation of the reconstruction algorithms for each...
The original HLT framework used in the Belle II experiment was previously upgraded, replacing the old IPC-based ring buffer with a ZeroMQ data transport to overcome an unexpected IPC locking problem. The new framework has been working stably in beam runs so far, but it lacks the capability to recover from a processing fault without stopping the ongoing data taking. In addition, the...
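A minimal sketch of the ZeroMQ push/pull pattern that such a transport layer can use to fan events out to worker processes; the socket types, endpoint, and message layout are illustrative assumptions, not the actual Belle II HLT framework API.

```python
import zmq

def distributor(endpoint="tcp://*:5557"):
    """Fan raw event buffers out to any number of worker processes."""
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.bind(endpoint)
    for event_id in range(1000):
        # Frame 0: event identifier, frame 1: raw payload (placeholder here).
        push.send_multipart([event_id.to_bytes(4, "little"), b"raw-event-payload"])

def worker(endpoint="tcp://localhost:5557"):
    """Receive events and process them without blocking the sender."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    while True:
        event_id, payload = pull.recv_multipart()
        # ... reconstruct / filter the event here ...
```

Compared with a shared-memory ring buffer, this pattern has no process-wide lock: a stalled worker simply stops pulling, which is one reason message-based transports are attractive for fault recovery.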
The Deep Underground Neutrino Experiment (DUNE) is a next generation long-baseline neutrino experiment based in the USA which is expected to start taking data in 2029. DUNE aims to precisely measure neutrino oscillation parameters by detecting neutrinos from the LBNF beamline (Fermilab) at the Far Detector, 1300 kilometres away, in South Dakota. The Far Detector will consist of four cryogenic...
Large-scale research facilities are becoming prevalent in the modern scientific landscape. One of these facilities' primary responsibilities is to make sure that users can process and analyse measurement data for publication. To allow barrier-free access to these highly complex experiments, almost all beamlines require fast feedback capable of manipulating and visualizing data online to...
NA62 is a K meson physics experiment based on a decay-in-flight technique and whose Trigger and Data Acquisition system (TDAQ) is multi-level and network based. A reorganization of both the beam line and the detector is foreseen in the next years to complete and extend the physics reach of NA62. One of the challenging aspects of this upgrade is a significant increase (x4) in the event rate...
The backend of the Belle II data acquisition system consists of a high-level trigger system, online storage, and an express-reconstruction system for online data processing. The high-level trigger system was updated to use the ZeroMQ networking library in place of the old ring buffer and TCP/IP sockets, and the new system is operating successfully. However, the online storage and express-reconstruction...
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high-luminosity provided by the Phase-2 upgrade of the...
The High Energy Photon Source (HEPS) is expected to generate a huge amount of data, which puts extreme pressure on data I/O in computing tasks; inefficient data I/O significantly affects computing performance.
First, taking into account the data reading mode and the limitations of computing resources, we propose a method for the automatic tuning of storage parameters, such as data block...
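A minimal sketch of the kind of parameter scan such automatic tuning can perform, here over the HDF5 chunk (block) size of a dataset; the file layout and sequential access pattern are illustrative assumptions, not the HEPS implementation.

```python
import time
import h5py
import numpy as np

def write_with_chunks(path, data, chunk_rows):
    """Write the same array with a given chunk size (the tunable block size)."""
    with h5py.File(path, "w") as f:
        f.create_dataset("data", data=data, chunks=(chunk_rows, data.shape[1]))

def read_throughput(path, dataset="data"):
    """Time a full sequential read of one dataset and return MB/s."""
    start = time.perf_counter()
    with h5py.File(path, "r") as f:
        arr = f[dataset][...]
    return arr.nbytes / 1e6 / (time.perf_counter() - start)

# Scan a few block sizes and keep the one with the best read throughput.
data = np.random.rand(20000, 1024).astype(np.float32)
for chunk_rows in (64, 256, 1024, 4096):
    write_with_chunks("scan.h5", data, chunk_rows)
    print(f"chunk_rows={chunk_rows:5d}  read={read_throughput('scan.h5'):7.1f} MB/s")
```

An automatic tuner would replace the fixed candidate list with a search driven by the observed access pattern and the available compute and storage resources.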
The CMS data acquisition (DAQ) is implemented as a service-oriented architecture where DAQ applications, as well as general applications such as monitoring and error reporting, are run as self-contained services. The task of deployment and operation of services is achieved by using several heterogeneous facilities, custom configuration data and scripts in several languages. Deployment of all...
The CMS experiment data acquisition (DAQ) collects data for events accepted by the Level-1 trigger from the different detector systems and assembles them in an event builder prior to making them available for further selection in the High Level Trigger, and finally storing the selected ones for offline analysis. In addition to the central DAQ providing global acquisition functionality, several...
The Liquid Argon Calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. They also provide inputs to the first level of the ATLAS trigger. After a successful period of data taking during LHC Run 2 between 2015 and 2018, the ATLAS detector entered the...
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). During the second long shut-down of the LHC, the ALICE detector was upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the online computing system (O2) a sustained input...
A new era of hadron collisions will start around 2029 with the High-Luminosity LHC, which will allow the collection of ten times more data than was collected during 10 years of operation of the LHC. This will be achieved through higher instantaneous luminosity, at the price of a higher number of collisions per bunch crossing.
In order to withstand the high expected radiation doses and the harsher...
Over the next decade, the ATLAS detector will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the detector will undergo a series of upgrades during major shutdowns. A key goal of these upgrades is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system was developed...
The volume and complexity of data produced at HEP and NP research facilities have grown exponentially. There is an increased demand for new approaches to process data in near-real-time. In addition, existing data processing architectures need to be reevaluated to see if they are adaptable to new technologies. A unified programming model for event processing and distribution that can exploit...
To improve the potential for discoveries at the LHC, a significant luminosity increase of the accelerator (HL-LHC) is foreseen in the late 2020s, to achieve a peak luminosity of 7.5x10^34 cm^-2 s^-1, in the ultimate performance scenario. HL-LHC is expected to run with a bunch-crossing separation of 25 ns and a maximum average of 200 events (pile-up) per crossing. To maintain or even improve...