Description
The CMS data acquisition (DAQ) system is implemented as a service-oriented architecture in which DAQ applications, as well as general-purpose applications such as monitoring and error reporting, run as self-contained services. Deployment and operation of these services currently rely on several heterogeneous facilities, custom configuration data, and scripts written in several languages. All software is deployed by installing RPMs via the Puppet management system on physical and virtual machines in the computing network. Two main approaches are used to operate and control the life cycle of the different services: short-lived services, such as event building and read-out, are managed using a custom-built infrastructure, while auxiliary, long-running services are managed using systemd. In this work, we restructure the existing system into a homogeneous, scalable cloud architecture, adopting a single paradigm in which all applications are orchestrated in a uniform environment with standardized facilities. In this new paradigm, DAQ applications are organized as groups of containers, and the required software is packaged into container images. Automation of all aspects of coordinating and managing containers is provided by the Kubernetes environment, where a set of physical and virtual machines is unified into a single pool of compute resources. In contrast to the current system, different versions of the software, including the operating system, libraries, and their dependencies, can coexist on the same network host; they are shipped in container images prepared at build time, with no need to apply software changes on the target machines. In this work we demonstrate that a container-based cloud architecture provides an across-the-board solution that can be applied to DAQ in CMS. We show the strengths and advantages of running DAQ applications in a container infrastructure compared to the traditional application model.
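
To illustrate what orchestrating a containerized DAQ application as a group of replicated containers could look like, the sketch below creates a Kubernetes Deployment using the official Python client. This is a minimal example, not the CMS DAQ configuration: the application name, image, namespace, replica count, and resource requests are hypothetical placeholders.

```python
# Minimal sketch: declare a replicated, containerized DAQ-style application
# as a Kubernetes Deployment via the official Python client.
# All names, image references, and resource values below are illustrative.
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config()
# when running inside the cluster).
config.load_kube_config()

# One container per pod, built from an image prepared at build time,
# so no software changes are applied on the target machines.
container = client.V1Container(
    name="event-builder",                       # hypothetical application name
    image="registry.example.org/daq/evb:1.0.0", # hypothetical image reference
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "8Gi"}, # illustrative resource requests
    ),
)

# Pod template labeled so the Deployment's selector can match it.
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "event-builder"}),
    spec=client.V1PodSpec(containers=[container]),
)

# Deployment spec: Kubernetes keeps the requested number of replicas
# running across the shared pool of physical and virtual machines.
spec = client.V1DeploymentSpec(
    replicas=4,
    selector=client.V1LabelSelector(match_labels={"app": "event-builder"}),
    template=template,
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="event-builder"),
    spec=spec,
)

# Submit the Deployment to a (hypothetical) "daq" namespace.
client.AppsV1Api().create_namespaced_deployment(namespace="daq", body=deployment)
```

In such a setup, rolling out a new software version amounts to updating the image tag in the Deployment and letting Kubernetes replace the running containers, which contrasts with applying RPM updates on each target machine.
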
Consider for long presentation: No