May 8 – 12, 2023
Norfolk Waterside Marriott
US/Eastern timezone

Deploying and Running Ceph Clusters for Analysis Facilities

Not scheduled
1h
Hampton Roads Ballroom and Foyer Area (Norfolk Waterside Marriott)

235 East Main Street, Norfolk, VA 23510
Poster Session

Speaker

Appleyard, Rob (UKRI STFC)

Description

The RAL Scientific Computing Department provides support for several large experimental facilities. These include, among others, the ISIS neutron spallation source, the Diamond X-Ray Synchrotron, the Rosalind Franklin Institute, and the RAL Central Laser Facility. We use a number of Ceph storage clusters to support the diverse requirements of these users.

These include Deneb, a petabyte-scale CephFS cluster; Sirius, a pure-NVMe cluster that provides the underlying storage for STFC’s private cloud; SWIFT and S3 object storage on our WLCG-focussed Echo cluster; and Arided, a new SSD cluster providing mountable CephFS storage to our private cloud. While all of these services use Ceph to provision the storage, each has a different architecture and usage profile. In particular, Arided has been deployed with the 'cephadm' cluster management system, a first at RAL.
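
As an illustration (not part of the contribution itself), the short Python sketch below shows how a client might read and write objects on S3 storage exposed by a Ceph RADOS Gateway, such as the S3 service on Echo. It uses the boto3 library; the endpoint URL, bucket name, and credentials are placeholders, not real service details.

import boto3

# Connect to a Ceph RADOS Gateway S3 endpoint (placeholder values throughout).
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.org",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",          # placeholder credential
    aws_secret_access_key="SECRET_KEY",      # placeholder credential
)

# Upload a small object, then list the bucket contents.
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello")
for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])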

This paper will provide an outline of these services: their development and deployment, how they are used, their hardware requirements and load profiles, and our experience of supporting them as production services. We will discuss our experience with the cephadm system, and we will also cover the expected development roadmaps for these services for the remainder of 2023 and into 2024.

Consider for long presentation Yes

Primary author

Appleyard, Rob (UKRI STFC)

Co-authors

Presentation materials

There are no materials yet.