Conveners
Poster Session: Poster Session Tuesday
- Taylor Childers (Argonne National Laboratory)
Poster Session: Poster Session Thursday
- Taylor Childers (Argonne National Laboratory)
When a new long-term storage facility was needed at the Lancaster WLCG Tier-2 Site, an architecture was chosen involving CephFS as a failure-tolerant back-end volume, and load-balanced XRootD as an endpoint exposing the volume via the HTTPS/DAVS protocols increasingly favoured by the WLCG and other users. This allows operations to continue in the face of disc/node failures with minimal...
Timepix4 is a hybrid pixel detector readout ASIC developed by the Medipix4 Collaboration. It consists of a matrix of about 230k pixels with 55 micron pitch, each equipped with an amplifier, a discriminator and a time-to-digital converter with 195 ps bin size, allowing measurement of time-of-arrival and time-over-threshold. It is equipped with two different types of links: slow control links, for the...
The Jiangmen Underground Neutrino Observatory (JUNO) experiment is designed to measure the neutrino mass ordering (NMO) using a 20-kton liquid scintillator (LS) detector. Besides the precise measurement of the reactor neutrino oscillation spectrum, an atmospheric neutrino oscillation measurement in JUNO offers independent sensitivity for NMO, which can potentially increase JUNO’s total...
Random number generators are an important component of many scientific projects. Many of those projects are written using programming models (like OpenMP and SYCL) to target different architectures; however, some of these programming models do not provide a random number generator. In this talk we introduce our random number generator wrapper, a header-only library that supports...
Delphes is a C++ framework to perform a fast multipurpose detector response simulation. The Circular Electron Positron Collider (CEPC) experiment runs fast simulation with a modified Delphes based on its own scientific objectives. The CEPC fast simulation with Delphes is a High Throughput Computing (HTC) application with small input and output files. In addition, to compile and run Delphes, only...
Slurm REST APIs have been available since version 20.02. With these REST APIs one can interact with the slurmctld and slurmdbd daemons in a RESTful way, so that job submission and cluster status queries can be carried out from a web system. To take advantage of the Slurm REST APIs, a web workbench system has been developed for the Slurm cluster at IHEP.
The workbench system consists of four subsystems:...
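As an illustration of the RESTful interaction such a workbench builds on, the sketch below queries the job list and submits a batch job through slurmrestd with Python's requests library. The host, port, API version (v0.0.38), token handling and job-description fields are assumptions for illustration; the exact endpoint paths and schema depend on the deployed Slurm release.

```python
import os
import requests

# Assumed slurmrestd endpoint and API version; adjust to the local deployment.
BASE = "http://slurm-rest.example.org:6820/slurm/v0.0.38"
HEADERS = {
    "X-SLURM-USER-NAME": os.environ["USER"],
    "X-SLURM-USER-TOKEN": os.environ["SLURM_JWT"],  # JWT issued e.g. via `scontrol token`
}

# Query the current job list from slurmctld.
jobs = requests.get(f"{BASE}/jobs", headers=HEADERS).json()
print(len(jobs.get("jobs", [])), "jobs in the queue")

# Submit a minimal batch job (field names follow the v0.0.38 schema and may differ).
payload = {
    "job": {
        "name": "workbench-demo",
        "partition": "debug",
        "current_working_directory": "/tmp",
        "environment": {"PATH": "/bin:/usr/bin"},
    },
    "script": "#!/bin/bash\nhostname\n",
}
resp = requests.post(f"{BASE}/job/submit", headers=HEADERS, json=payload)
print(resp.json().get("job_id"))
```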
Documentation on all things computing is vital for an evolving collaboration of scientists, technicians, and students. Using the MediaWiki software as the framework, a searchable knowledge base is provided via a secure web interface to the large cadre of colleagues working on the DUNE far and near detectors. Organization information, links relevant to working groups, operations and...
WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with the WLCG Throughput working group, has created a monitoring infrastructure that gathers metrics from the...
Particle identification is an important ingredient of particle physics experiments. Distinguishing the charged hadrons (pions, kaons, protons and their antiparticles) is often crucial, in particular for hadronic decays, where efficient particle identification is needed to obtain a desirable signal-to-background ratio. An optimal performance of particle identification in a large...
NorduGrid ARC is widely used today in the Worldwide LHC Computing Grid as one of the two recommended middleware solutions connecting the grid sites. It has served the WLCG and the High Energy Physics community very well since its birth in 2002. Now, though, ARC aims to reach more communities outside HEP, for instance in the fields of Bioinformatics, Astrophysics and Climate Research, to...
The ATLAS software tutorial is a centrally organized suite of educational materials that helps to prepare newcomers for work in ATLAS. The broad objective is to familiarize participants with the basic skills needed to accomplish data analysis tasks, with a strong focus on software tools. In this talk, we will outline the recent changes to the ATLAS software tutorial to follow a project-based...
Modern large distributed computing systems produce large amounts of monitoring data. In order for these systems to operate smoothly, under-performing or failing components have to be identified quickly, and preferably automatically, enabling the system managers to react accordingly.
In this contribution, we analyze job and data transfer data collected in the running of the LHC computing...
Imperial College London hosts a large Tier-2 WLCG grid site based around an HTCondor batch system; additionally, it provides cloud computing facilities using OpenStack for non-WLCG activities. These cloud resources are open to opportunistic usage, provided the impact on the primary cloud users remains low.
In common with most Tier 2 sites we see constant job pressure from the WLCG VOs, while the...
Many large-scale physics experiments, such as ATLAS at the Large Hadron Collider, the Deep Underground Neutrino Experiment, and sPHENIX at the Relativistic Heavy Ion Collider, rely on accurate simulations to inform data analysis and derive scientific results. The simulations' inevitable inaccuracies may be detected and corrected using heuristics in a conventional analysis workflow.
However, residual errors...
Software development projects at Edinburgh identified a desire to build and manage our own monitoring platform. This allows us to better support the evolving and varied physics and computing interests of our Experimental Particle Physics group. This production platform enables oversight of international experimental data management, local software development projects and active monitoring...
The ATLAS Spanish Tier-1 and Tier-2 sites have more than 18 years of experience in the deployment and development of LHC computing components and their successful operation. The sites are actively participating in, and even coordinating, R&D computing activities in LHC Run 3 and are developing the computing models needed for the HL-LHC period.
In this contribution, we present details on the...
High energy physics experiments rely heavily on MC simulations to extract physics results. However, detailed simulation often requires a tremendous amount of computing resources.
Generative Adversarial Networks and other deep generative techniques can drastically speed up computationally heavy simulations, such as the simulation of the calorimeter...
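As a hedged sketch of the kind of generative setup referred to here (not the contribution's actual architecture), the following PyTorch snippet defines a minimal GAN generator and discriminator for fixed-size calorimeter images and performs one adversarial training step on toy data.

```python
import torch
import torch.nn as nn

LATENT, CELLS = 64, 30 * 30  # toy latent size and flattened calorimeter image

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, CELLS), nn.ReLU(),  # cell energies are non-negative
)
discriminator = nn.Sequential(
    nn.Linear(CELLS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCELoss()

real = torch.rand(128, CELLS)      # stand-in for fully simulated showers
fake = generator(torch.randn(128, LATENT))

# Discriminator step: real -> 1, fake -> 0.
opt_d.zero_grad()
loss_d = bce(discriminator(real), torch.ones(128, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(128, 1))
loss_d.backward()
opt_d.step()

# Generator step: make the discriminator call fakes real.
opt_g.zero_grad()
loss_g = bce(discriminator(fake), torch.ones(128, 1))
loss_g.backward()
opt_g.step()
```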
Most common CPU architectures provide simultaneous multithreading (SMT). With SMT, the operating system sees two logical cores per physical core and can schedule two processes to one physical CPU core. This overbooking of physical cores enables better usage of the parallel pipelines and duplicated components within a CPU core. On systems with several applications running in parallel, such as...
Until now the grid storage at Melbourne was provided by the DPM storage system which is now reaching the end of support, as well as the end of the disk lifetimes. Continuing to provide grid storage for the ATLAS and Belle II experiments requires that we move to a new solution; one that can be supported long term with minimal manpower by taking advantage of existing resources at Melbourne. Over...
Modern Nuclear Physics experimental setups run at higher beam intensities, resulting in increased noise in the detector components used for particle track reconstruction. Increased uncorrelated signals (noise) result in decreased particle reconstruction efficiency.
In this work, we investigate the usage of Machine Learning, specifically Convolutional Neural Network Auto-Encoders...
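A minimal sketch of a convolutional auto-encoder used as a de-noiser, assuming 2D hit maps as input; the map size, layer shapes and the use of PyTorch are illustrative choices, not taken from the contribution.

```python
import torch
import torch.nn as nn

# De-noising auto-encoder for 1x112x112 hit maps (size chosen for illustration).
class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
clean = (torch.rand(8, 1, 112, 112) > 0.97).float()                    # sparse "true" hits
noisy = torch.clamp(clean + (torch.rand_like(clean) > 0.99).float(), 0, 1)

# Train the network to recover the clean map from the noisy one.
loss = nn.functional.binary_cross_entropy(model(noisy), clean)
loss.backward()  # an optimizer step would follow in a real training loop
```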
The aim of the LHCb Upgrade II at the LHC is to operate at a luminosity of $1.5 \times 10^{34}$ cm$^{-2}$s$^{-1}$ to collect a data set of 300 fb$^{-1}$. This will require a substantial modification of the current LHCb ECAL due to high radiation doses in the central region and increased particle densities.
Advanced detector R&D for both new and ongoing experiments in HEP requires performing computationally intensive...
ML/DL techniques have shown their power in the improvement of several studies and tasks in HEP, especially in physics analysis. Our approach has been to take a number of the ML/DL tools provided by several Open Source platforms and apply them to several classification problems, for instance the ttbar resonance extraction in an LHC experiment.
A comparison has been made...
In High Energy Physics, detailed and time-consuming simulations are used for particle interactions with detectors. To bypass these simulations with a generative model, the generation of large point clouds in a short time is required, while the complex dependencies between the particles must be correctly modeled. Particle showers are inherently tree-based processes, as each particle is produced...
With the extended usage of machine learning models, more and more complex algorithms are being studied. On the one hand, the development and optimisation processes become more challenging; on the other hand, studies of model generalisation and re-usability become interesting. In this context, efficient, flexible ways to track continuous changes during development, as well as relevant...
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging. Benefiting from more than 20 years of feedback in the LHC context, the second major version was released in 2018. This poster describes how a renewed architecture and integration with modern technologies ease the usage and deployment of a complete AMI stack. It describes how to deploy...
Differentiable Programming could open even more doors in HEP analysis and computing to Artificial Intelligence/Machine Learning. Current common uses of AI/ML in HEP are deep learning networks – providing us with sophisticated ways of separating signal from background, classifying physics, etc. This is only one part of a full analysis – normally skims are made to reduce dataset sizes by...
Liquid argon time projection chambers (TPCs) are widely used in particle detection. High quality physics simulators have been developed for such detectors in a variety of experiments, and the resulting simulations are used to aid in reconstruction and analysis of collected data. However, the degree to which these simulations are reflective of real data is limited by the knowledge of the...
High-precision modeling of systems is one of the main areas of industrial data analysis today. Models of the systems, their digital twins, are used to predict their behavior under various conditions. We have developed a digital twin of a data storage system using generative models of machine learning. The system consists of several types of components: HDD and SSD disks, disk pools with...
As a conventional iterative ptychography reconstruction method, Difference Map (DM) is suitable for computing large datasets of X-ray diffraction patterns on the multi-GPU heterogeneous system at HEPS (High Energy Photon Source). However, it assumes that the GPU memory is large enough to apply CUDA FFT/IFFT to all the patterns transferred from RAM to the GPU. Meanwhile, the intermediate data during...
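To make the GPU-memory constraint concrete, the sketch below processes diffraction patterns in RAM-resident chunks, applying on the GPU the Fourier modulus projection that sits at the core of Difference Map iterations. CuPy, the chunk size and the array shapes are assumptions for illustration, not the HEPS implementation.

```python
import numpy as np
import cupy as cp

def modulus_projection_chunked(exit_waves, measured_amplitudes, chunk=256):
    """Replace the Fourier amplitude of each exit wave by the measured one,
    streaming chunks from host RAM so the full data set never has to fit
    in GPU memory."""
    out = np.empty_like(exit_waves)
    for i in range(0, len(exit_waves), chunk):
        waves = cp.asarray(exit_waves[i:i + chunk])          # host -> device
        amps = cp.asarray(measured_amplitudes[i:i + chunk])
        f = cp.fft.fft2(waves)
        f = amps * cp.exp(1j * cp.angle(f))                  # keep phase, swap amplitude
        out[i:i + chunk] = cp.asnumpy(cp.fft.ifft2(f))       # device -> host
    return out

# Toy data: 1024 patterns of 256x256 pixels, kept in host RAM.
waves = (np.random.rand(1024, 256, 256)
         + 1j * np.random.rand(1024, 256, 256)).astype(np.complex64)
amps = np.abs(np.fft.fft2(waves)).astype(np.float32)
updated = modulus_projection_chunked(waves, amps)
```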
Providing a high-performance and reliable tape storage system is GridKa's top priority. The GridKa tape storage system was recently migrated from IBM SP to the High Performance Storage System (HPSS) for LHC and non-LHC HEP experiments. These are two different tape backends, each with its own design and specifics that need to be studied and understood in depth. Taking into account the features...
We present a case for ARM chips as an alternative to standard x86 CPUs at WLCG sites, as we observed better performance and lower power consumption in a number of benchmarks based on actual HEP workloads, from ATLAS simulation and reconstruction tasks to the most recent HEP-Score containerised jobs.
To support our case, we present novel measurements on the performance and energy...
Starting this year, the upgraded LHCb detector is collecting data with a pure software trigger. In its first stage, reducing the rate from 30 MHz to about 1 MHz, GPUs are used to reconstruct and trigger on B and D meson topologies and high-pT objects in the event. In its second stage, a CPU farm is used to reconstruct the full event and perform candidate selections, which are persisted for...
Increases in data volumes are forcing high-energy and nuclear physics experiments to store more frequently accessed data on tape. Extracting the maximum performance from tape drives is critical to make this viable from a data availability and system cost standpoint. The nature of data ingest and retrieval in an experimental physics environment makes achieving high access performance difficult...
XRootD servers are commonplace in many parts of HEP data management and are a key component of data access and management strategies in both the WLCG and OSG. Deployments of XRootD instances across the UK have demonstrated the versatility and expandability of this data management software. As we become more reliant on these services, there is a requirement to collect low-level metrics to...
In experiments with a noble liquid time-projection chamber there are arrays of photosensors positioned to allow for inference of the locations of interactions within the detector. If there is a gap in data left by a broken or saturated photosensor, inference of the position is less precise and less accurate. As it is not practical to repair or replace photosensors once the experiment has...
TF-PWA is a general framework for partial wave analysis developed on top of TensorFlow2. Partial wave analysis is a powerful method to determine the 4-momentum distribution of multi-body decay final states and to extract the internal information of interest. Based on a simple topological representation, TF-PWA can deal with most of the processes of partial wave analysis automatically and...
INFN-CNAF is one of the Worldwide LHC Computing Grid (WLCG) Tier-1 data centers, providing computing, networking and storage resources to a wide variety of scientific collaborations, not limited to the four LHC experiments. The INFN-CNAF data center will move to a new location next year. At the same time, the requirements from our experiments and users are becoming increasingly challenging and...
The compact, highly granular time-of-flight neutron detector is designed for the fixed-target BM@N experiment at the Nuclotron (JINR). This detector aims to measure the anisotropy of azimuthal neutron flows, which is sensitive to the equation of state of dense nuclear matter. A graph neural network (GNN) method is proposed to reconstruct fast neutrons with energies up to a few GeV produced in...
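As a rough illustration of a GNN applied to detector hits (using PyTorch Geometric, which is an assumption and not necessarily the framework used here), the snippet below scores hits, represented as graph nodes with position/time/amplitude features, as neutron-induced or background.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class HitClassifier(torch.nn.Module):
    """Two graph-convolution layers followed by a per-node score."""
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, data):
        x = torch.relu(self.conv1(data.x, data.edge_index))
        x = torch.relu(self.conv2(x, data.edge_index))
        return torch.sigmoid(self.out(x)).squeeze(-1)

# Toy event: 40 hits with (x, y, z, time, amplitude) features,
# connected by random edges for brevity (spatial neighbours in practice).
x = torch.rand(40, 5)
edge_index = torch.randint(0, 40, (2, 120))
event = Data(x=x, edge_index=edge_index)

scores = HitClassifier()(event)   # per-hit probability of belonging to a neutron cluster
```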
Decay products of long-lived particles are an important signature in dark sector searches in collider experiments. The current Belle II tracking algorithm is optimized for tracks originating from the interaction point at the cost of a lower track finding efficiency for displaced tracks. This is especially the case for low-momentum displaced tracks that are crucial for dark sector searches in...
Multi-messenger astrophysics provides valuable insights into the properties of the physical Universe. These insights arise from the complementary information carried by photons, gravitational waves, neutrinos and cosmic rays about individual cosmic sources and source populations.
When a gravitational wave (GW) candidate is identified by the LIGO, Virgo and KAGRA (LVK) observatory network, an...
Many data-intensive experiments need to track large amounts of data, large code bases to process the data, and configuration data (or metadata). All of these datasets have unique requirements for version control. Furthermore, these datasets must have rules and requirements about how they can interact with the code. For this it is crucial to make the storage of data, the metadata in particular...
The ATLAS EventIndex is the global catalogue of all ATLAS real and simulated events. During the LHC long shutdown between Run 2 (2015-2018) and Run 3 (2022-2025) its components were substantially revised, and a new system has been deployed for the start of Run 3 in Spring 2022. The new core storage system is based on HBase tables with a Phoenix interface. It allows faster data ingestion rates...
When dealing with benchmarking, result collection and sharing is as essential as the production of the data itself. A central source of information allows for data comparison, visualization, and analysis, which the community both contributes to and profits from.
While this is the case in other fields, in the High Energy Physics (HEP) benchmarking community both script and result sharing...
There is emerging interest in e+e- Higgs factories in the context of the FCC-ee feasibility study in Europe, the ILC in Japan, and new proposals (C3, HELEN) in the US. As original developers of LCFIPlus (published in NIM A, arXiv:1506.08371), a widely used jet-analysis software package including flavor tagging for e+e- linear collider studies, we are now developing DNN-based flavor tagging...
The computing resource needs of LHC experiments, such as CMS, are expected to continue growing significantly over the next decade, during the Run 3 and especially the HL-LHC era. Additionally, the landscape of available resources will evolve, as HPC (and Cloud) resources will provide a comparable, or even dominant, fraction of the total capacity, in contrast with the current situation,...
The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the geographic South Pole. Understanding detector systematic effects is a continuous process. This requires the Monte Carlo simulation to be updated periodically to quantify potential changes and improvements in science results with more detailed modeling of the systematic effects. IceCube’s largest systematic...
The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the geographic South Pole. IceCube recently completed a multi-year user management migration from a manual LDAP-based process to an automated and integrated system built on Keycloak. A custom interface allows lead scientists to register and edit their institution’s user accounts, including group memberships...
The complexity of the ATLAS detector and its infrastructure require an excellent understanding of the interdependencies between the components when identifying points of failure (POF). The ATLAS Technical Coordination Expert System features a graph-based inference engine that identifies a POF provided a list of faulty elements or Detector Safety System (DSS) alarms. However, the current...
DIRAC is a widely used “framework for distributed computing”. It works by building a layer between the users and the resources offering a common interface to a number of heterogeneous providers. DIRAC, like many other workload management systems, uses pilot jobs to check and configure the worker-node environment before fetching a user payload. The pilot also records a number of different...
Having a long tradition in state-of-the-art distributed IT technologies, from the first small clusters to Grid and Cloud-based computing, in the last couple of years INFN made available to its users INFN Cloud: an easy to use, distributed, user-centric cloud infrastructure and services portfolio targeted to scientific communities.
Given the distributed nature of the infrastructure, the...
This contribution introduces the job optimizer service for the next-generation ALICE grid middleware, JAliEn (Java ALICE Environment). It is a continuous service running on central machines and is essentially responsible for splitting jobs into subjobs, which are then distributed and executed on the ALICE grid. There are several ways of creating subjobs based on various strategies relevant...
With ever increasing data rates in modern physics experiments there is a need to partially analyze incoming data in real time. Increasing data volumes create a need to filter data to write out only events that are useful for physics analysis. In this work we present a Level-3 trigger developed using Artificial Intelligence to select electron trigger events at the data acquisition level....
The Super Tau-Charm Facility (STCF) is a future electron-positron collider proposed in China. It has a peak luminosity above $0.5\times10^{35}$ cm$^{-2}$s$^{-1}$ and a center-of-mass energy ranging from 2 to 7 GeV. A time-of-flight detector based on the detection of internally reflected Cherenkov light (DTOF) is proposed for the endcap particle identification (PID) at the STCF. In this contribution, we...
In this work, we report the status of a neural network regression model trained to extract new physics (NP) parameters in Monte Carlo (MC) data. We utilize a new EvtGen NP MC generator to generate $B \rightarrow K^{*} \ell^{+} \ell^{-}$ events according to the deviation of the Wilson Coefficient $C_{9}$ from its SM value, $\delta C_{9}$. We train a convolutional neural network regression...
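A minimal sketch of a convolutional regression network of the kind described, mapping a binned image of events (e.g. $q^{2}$ vs. a decay angle) to a single $\delta C_{9}$ estimate; the input binning, layer sizes and the use of PyTorch are placeholders, not the analysis configuration.

```python
import torch
import torch.nn as nn

# Regress a single scalar (delta C9) from a 2D histogram of generated events.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

histograms = torch.rand(16, 1, 32, 32)   # toy batch of binned MC samples
true_dc9 = torch.randn(16, 1) * 0.5      # toy generated shifts of C9

loss = nn.functional.mse_loss(model(histograms), true_dc9)
loss.backward()                          # an optimizer step would follow
```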
The ROOT RNTuple I/O subsystem has been designed to address performance bottlenecks and shortcomings of ROOT's current state-of-the-art TTree I/O subsystem. RNTuple provides a backwards-incompatible redesign of the TTree binary format and API that evolves ROOT event data I/O for the challenges of the upcoming decades. It has been engineered for high performance on modern storage hardware, a...
The Mu2e experiment will search for the neutrino-less conversion of muons to electrons in muonic aluminum. The Mu2e tracker detector measures the momentum of signal and background particles traveling down the beamline. One major background source is the $\mu\rightarrow e \bar{\nu} \nu$ decay of muons in orbit (DIO) process. Due to resolution and algorithm errors during reconstruction, these...
SHiP (Search for Hidden Particles) and the associated CERN SPS Beam Dump Facility form a new general-purpose experiment proposed at the SPS to search for "hidden" particles predicted by many recently elaborated Hidden Sector extensions of the Standard Model. The experiment searches for very weakly interacting long-lived particles and crucially depends on effective background...
Particle identification (PID) is one of the most commonly used tools for physics analysis in collider physics experiments. To achieve good PID performance, information given by multiple sub-detectors is usually combined. This is particularly necessary for the discrimination of charged particles that have close masses (e.g. muon and pion). However, due to the intrinsic correlations...
Reweighting Monte Carlo (MC) events for alternate benchmarks of beyond standard model (BSM) physics is an effective way to reduce the computational cost of physics searches. However, applicability of reweighting is often constrained by technical limitations. We demonstrate how pre-trained neural networks can be used to obtain fast and reliable reweighting without relying on the full MC...
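One standard way to use a pre-trained network for reweighting (a hedged illustration, not necessarily the exact method of this contribution) is the classifier-based likelihood-ratio trick: a network trained to distinguish the nominal sample from a target BSM benchmark yields per-event weights $f/(1-f)$.

```python
import torch

def reweight(events, classifier):
    """Per-event weights w = f / (1 - f), where f is the output of a classifier
    trained to separate the target benchmark (label 1) from the nominal sample
    (label 0). Nominal events weighted by w then approximate the target benchmark."""
    with torch.no_grad():
        f = classifier(events).squeeze(-1).clamp(1e-6, 1 - 1e-6)
    return f / (1.0 - f)

# Stand-in for a pre-trained classifier; in practice it would be loaded from disk.
classifier = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
)
nominal_events = torch.rand(1000, 4)   # toy kinematic features
weights = reweight(nominal_events, classifier)
```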
dCache at BNL has been in production for almost two decades. For years, dCache used the default driver included with the dCache software to interface with HPSS tape storage systems. Due to the synchronous nature of this approach and the high resource demands resulting from periodic script invocations, scalability was significantly limited. During the WLCG tape challenges, bottlenecks in dCache...
Particle accelerators, such as the Spallation Neutron Source (SNS), require high beam availability in order to maximize scientific discovery. Recently, researchers have made significant progress utilizing machine learning (ML) models to identify anomalies, prevent damage, reduce beam loss, and tune accelerator parameters in real time. In this work, we study the use of uncertainty aware...
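As one common way to make a model "uncertainty aware" (Monte Carlo dropout is used here purely as an illustrative choice, not as the method of the study), the snippet below keeps dropout active at inference time and takes the spread of repeated forward passes as a predictive uncertainty, which can then gate anomaly alarms.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),           # anomaly score in [0, 1]
)

def predict_with_uncertainty(x, n_samples=50):
    model.train()                             # keep dropout active at inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)    # score and its uncertainty

pulse_features = torch.rand(1, 16)            # toy stand-in for beam diagnostic features
score, sigma = predict_with_uncertainty(pulse_features)
if score.item() > 0.9 and sigma.item() < 0.05:  # act only on confident predictions
    print("flag pulse as anomalous")
```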
During the long shutdown prior to LHC Run 3, the CMS reconstruction software (CMSSW) was upgraded to offload 40% of the High Level Trigger (HLT) processing to GPUs. This upgrade accelerated the reconstruction algorithms and improved the efficiency of the HLT farm; however, it introduced new parameters to the system that had to be selected carefully to maximise performance. When offloading...
OpenMP is a directive-based shared-memory parallel programming model traditionally used for multicore CPUs. In its recent versions, OpenMP was extended to enable GPU computing via its “target offloading” model. The architecture-agnostic compiler directives can in principle offload to multiple types of GPUs and FPGAs, and compiler support is under active development.
In this work, we...
The Wide Field-of-view Cherenkov Telescope Array (WFCTA) is an important component of the Large High Altitude Air Shower Observatory (LHAASO), which aims to measure the individual energy spectra of cosmic rays from ~30 TeV to a couple of EeV. Since the experiment started running in 2020, WFCTA simulation jobs have been running on the Intel x86 cluster; last year only 25% of the first stage...
During the first WLCG Network Data Challenge in the fall of 2021, some shortcomings were identified in the monitoring that impeded the ability to fully understand the results collected during the data challenge. One of the simplest missing components was site-specific network information, especially information about traffic going into and out of any of the participating sites. Without this...
The ATLAS EventIndex system consists of the catalogue of all events collected, processed or generated by the ATLAS experiment at the CERN LHC accelerator, and all associated software tools. The new system, developed for LHC Run 3, makes use of Apache HBase - the Hadoop database - and Apache Phoenix - an SQL/relational database layer for HBase - to store and access all the event metadata. The...
Since 2009 Port d’Informació Científica (PIC), in Barcelona, has hosted the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) Data Center. At the Observatorio del Roque de Los Muchachos (ORM) on the Canary Island of La Palma (Spain), data produced from observations by the 17m diameter MAGIC telescopes are transferred to PIC on a daily basis. More than 200 TB per year are being transferred to...
In this paper we describe the development of a streamlined framework for large-scale ATLAS pMSSM reinterpretations of LHC Run 2 analyses using containerised computational workflows. The project is looking to assess the global coverage of BSM physics and requires running O(5k) computational workflows representing pMSSM model points. Following ATLAS Analysis Preservation policies, many analyses...
Cutting edge research has driven scientists in many fields into the world of big data. While data storage technologies continue to evolve, the costs remain high for rapid data access on such scales and are a major factor in planning and operations. As a joint effort spanning experiment scientists, developers for ROOT, and industrial leaders in data compression, we sought to address this...
DIRAC is a widely used "framework for distributed computing". It works by providing a layer between users and computing resources by offering a common interface to a number of heterogeneous resource providers. DIRAC originally provided support for dynamic workload management on a cloud via its VMDIRAC extension. When the VMDIRAC extension was envisaged, it was common to use commercial clouds...
The Belle II experiment situated at the SuperKEKB energy-asymmetric $e^+e^-$ collider began operation in 2019. It has since recorded half of the data collected by its predecessor, and reached a world record instantaneous luminosity of $4.7\times 10^{34}$ cm$^{-2}$s$^{-1}$. For distinguishing decays with missing energy from background events at Belle II, the residual calorimeter energy measured...
The European energy crisis during 2022 prompted many computing facilities to take urgent electricity-saving measures, in part voluntarily, in response to EU and national appeals, but also to keep within a flat energy budget as the electricity price rose. We review some of these measures and, as the situation normalises, take a longer view on how the flexibility of high throughput computing can...
With over 2000 active members from 174 institutes in 41 countries around the world, the ALICE experiment is one of the 4 large experiments at CERN. With such numerous interactions, the experiment management needs a way to record members' participation history and their current status, such as employments, institutes, appointments, clusters and funding agencies, as well as to automatically...
The ATLAS experiment has 18+ years of experience using workload management systems to deploy and develop workflows to process and to simulate data on the distributed computing infrastructure. Simulation, processing and analysis of LHC experiment data require the coordinated work of heterogeneous computing resources. In particular, the ATLAS experiment utilizes the resources of 250 computing...
Over the past several years, the dCache collaboration has been working on developing a feature-rich, efficient and scalable data lifecycle and QoS service. With the ever-increasing volume of data anticipated by the LHC and Intensity Frontier experiments, their reliance on tape storage and the limited amount of disk cache available to them, this effort to provide for efficient staging of large...
dCache (https://dcache.org) is a highly scalable storage system providing location-independent access to data. The data are stored across multiple data servers as complete files presented to the end-user via a single-rooted namespace. From its inception, dCache has been designed as a caching disk buffer to a tertiary tape storage system with the assumption that the latter has virtually...
Every sub-detector in the ATLAS experiment at the LHC, including the ATLAS Calorimeter, writes conditions and calibration data into an ORACLE Database (DB). In order to provide an interface for reliable interactions with both the Conditions Online and Offline DBs, a unique semi-automatic web-based application, TileCalibWeb Robot, has been developed. TileCalibWeb is being used in LHC Run 3 by...
The development of modern heterogeneous accelerators, such as GPUs, has boosted the prosperity of artificial intelligence (AI). Recent years have seen an increasing popularity of AI for the nuclear physics (AI4NP) domain. While most AI4NP studies focus on feasibility analysis, we target their performance on modern GPUs equipped with Tensor Cores.
We first benchmark the throughput...
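The kind of Tensor Core throughput measurement described can be sketched as follows: timing a half-precision matrix multiplication with CUDA events and comparing it against FP32. This is a minimal illustration assuming a CUDA-capable GPU; the matrix size and iteration count are arbitrary.

```python
import torch

def matmul_tflops(dtype, n=4096, iters=20):
    """Time an n x n x n GEMM on the GPU and return the achieved TFLOP/s."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.matmul(a, b)                       # warm-up
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    seconds = start.elapsed_time(end) / 1e3 / iters   # elapsed_time() is in ms
    return 2 * n**3 / seconds / 1e12

print("FP32:", matmul_tflops(torch.float32))
print("FP16 (Tensor Cores where available):", matmul_tflops(torch.float16))
```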
This work shows the implementation of Artificial Intelligence models in the track reconstruction software for the CLAS12 detector at Jefferson Lab. The Artificial Intelligence-based approach resulted in improved track reconstruction efficiency in high-luminosity experimental conditions. The track reconstruction efficiency increased by $10-12\%$ for a single particle, and statistics in...
A new methodology to improve the sensitivity to new physics contributions to Standard Model processes at the LHC is presented.
A Variational AutoEncoder trained on Standard Model processes is used to identify Effective Field Theory contributions as anomalies. While the output of the model is supposed to be very similar to the inputs for Standard Model events, it is expected to deviate...
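A hedged sketch of how such a Variational AutoEncoder can be turned into an anomaly score: after training on Standard Model events, the per-event reconstruction error (optionally augmented with the KL term) is used as the discriminating variable, with EFT-like events expected at larger values. Layer sizes and the choice of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=12, latent=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu, self.logvar = nn.Linear(32, latent), nn.Linear(32, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        return self.dec(z), mu, logvar

def anomaly_score(model, x):
    """Per-event reconstruction error; large values indicate non-SM-like events."""
    with torch.no_grad():
        recon, _, _ = model(x)
    return ((recon - x) ** 2).mean(dim=1)

vae = VAE()                                 # would be trained on SM events first
sm_like_events = torch.randn(1000, 12)      # toy high-level kinematic features
scores = anomaly_score(vae, sm_like_events)
```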
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging. Benefiting from more than 20 years of feedback in the LHC context, the second major version was released in 2018. Each sub-system of the stack has recently been improved in order to acquire messaging/telemetry capabilities. This poster describes the whole stack monitoring with the...
VISPA (VISual Physics Analysis) realizes a scientific cloud enabling modern scientific data analysis in a web browser. Our local VISPA instance is backed by a small institute cluster and is dedicated to fundamental research and university education. Through hardware upgrades (732 CPU threads, 29 workstation GPUs), we have tailored the cloud services to accomplish both rapid turn-around when...
Particle identification at the Super Charm-Tau factory experiment will be provided by a Focusing multilayer Aerogel Ring Imaging CHerenkov detector (FARICH). Due to hardware constraints, the detector captures a great amount of noise, which must be mitigated to reduce both the data flow and the required storage space.
In this presentation we describe our approach to filtering signal hits. The approach...
The Worldwide Large Hadron Collider Computing Grid (WLCG) actively pursues the migration from the IPv4 protocol to IPv6. For this purpose, the HEPiX-IPv6 working group was founded during the fall HEPiX Conference in 2010. One of the first goals was to categorize the applications running in the WLCG into different groups: the first group was easy to define, because it comprised all...