Meeting of the EIC Streaming Readout Consortium
Organizers: E. Brash (CNU), C. Cuevas (JLab), M. Diefenthaler (JLab), G. Heyes (JLab)
Remote meeting via BlueJeans (Meeting ID 493409952):
Join via browser or dial in via phone: (888) 240-2560 (see also international numbers).
Previous workshops (part of EIC Ad-hoc Meeting Series):
01/2018 Streaming Readout II
01/2017 Trigger/Streaming Readout
The afternoon session on Monday, 3 December, was filled with presentations covering the session topics.
Ben Raydo from Jefferson Lab covered the existing triggered DAQ systems at JLab and presented their current performance: high trigger rates [100 kHz] and data rates approaching 1 GB/s for Hall D during the fall 2018 run period. The existing JLab DAQ front-end modules were described, and examples of the CLAS12 trigger system were explained in detail.
Front-end hardware designed around ASICs for true streaming readout cannot use ASICs that require a trigger signal to begin digitization or to begin reading buffered data. Several new ASIC companies, including large commercial firms, now offer high-speed waveform digitizers that will work in streaming mode. Ben gave examples where ASICs are used to read out existing 12 GeV detectors and pointed out possible ASICs known to be in development.
Plans for a full VXS crate of 250 MHz, 12-bit ADC boards [256 channels] were explained; this test will demonstrate the full streaming performance, including new software to handle the new data flow. The test will be set up in the new INDRA lab area. If it works well, the full CLAS12 DAQ could be tested in this mode, and rough estimates show data rates could approach 50 GB/s.
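For orientation (this arithmetic is our own back-of-envelope estimate, not taken from the talk), the raw, pre-suppression stream rate of such a crate can be estimated directly from the channel count, sampling rate, and sample width; zero suppression and thresholding are what bring the delivered rate down toward the figures quoted above:

# Back-of-envelope raw (pre-zero-suppression) stream rate for one VXS crate
# of 256 channels of 250 Msps, 12-bit flash ADCs. Illustrative assumption:
# samples packed at exactly 12 bits with no headers or suppression.
channels = 256
sample_rate_hz = 250e6      # 250 Msps per channel
bits_per_sample = 12

raw_rate_bytes = channels * sample_rate_hz * bits_per_sample / 8
print(f"raw crate rate: {raw_rate_bytes / 1e9:.0f} GB/s")   # ~96 GB/s before suppression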
Fabrizio Ameli delivered a presentation focused on the new digitizer board inherited from the KM3NeT experiment. The BDX experiment specifications were listed; the front-end board provides 12 channels of 14-bit/250 Msps ADC data. The board is designed for trigger-less operation and features a time-variable window and zero suppression.
The board interfaces directly to the front-end SiPMs, and the bias voltages are generated on board. A commercial System-on-Module [SOM] mezzanine card carries an FPGA based on the Xilinx Zynq-7045. The SOM manages the fiber transceiver data as well as the Ethernet RJ45 and USB 2.0 interfaces. Other board features include flexible timing resources, including White Rabbit, and a PLL clock interface shown to be highly stable with extremely low jitter, allowing the timing signals to be daisy-chained across multiple boards. Other high-speed interfaces, such as SATA connections, are also provided. Test results were presented for the bias voltage [HV] monitoring feature and for clock timing jitter.
Future work will focus on timing resolution with waveform reconstruction, DMA implementation for higher throughput, and development of slow controls for EPICS or other standards.
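The zero-suppression concept mentioned above can be illustrated with a minimal sketch (our own simplified example, not the board's actual firmware algorithm or parameters): only samples in a configurable window around threshold crossings are kept, together with their positions so that timing information survives.

def zero_suppress(samples, threshold, pre=2, post=3):
    """Toy zero suppression: keep windows of samples around threshold
    crossings; everything else is dropped. 'pre'/'post' set how many
    samples are retained before/after each crossing (illustrative values)."""
    keep = [False] * len(samples)
    for i, s in enumerate(samples):
        if s > threshold:
            for j in range(max(0, i - pre), min(len(samples), i + post + 1)):
                keep[j] = True
    # Return (start_index, window) pairs so downstream software keeps timing info.
    out, i = [], 0
    while i < len(samples):
        if keep[i]:
            j = i
            while j < len(samples) and keep[j]:
                j += 1
            out.append((i, samples[i:j]))
            i = j
        else:
            i += 1
    return out

# Example: a small pulse in an otherwise quiet baseline
print(zero_suppress([0, 1, 0, 2, 40, 35, 12, 3, 0, 0, 0, 1], threshold=10))
# -> [(2, [0, 2, 40, 35, 12, 3, 0, 0])]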
Ed Jastrzembski gave an update on the streaming DAQ test stand that the Jefferson Lab Data Acquisition Group is building. It uses the new SAMPA ASIC developed for the ALICE experiment at the LHC. The 32-channel SAMPA chip has sophisticated DSP features and is designed to be read out continuously. Jefferson Lab is interested in seeing how they can take advantage of this new technology and the continuous readout concept in their experiments.
Rather than design a system from scratch to support the SAMPA, the Jefferson Lab group chose to copy the ALICE readout architecture exactly. On a Front End Card (FEC), data from five SAMPAs are concentrated onto two high-speed optical links (GBT) that connect to a Common Readout Unit (CRU). The links also carry timing and synchronization signals from the CRU to the FEC. A single CRU can receive data from up to 24 FECs. The ALICE-designed CRU was not available, so a functionally equivalent readout unit designed for the ATLAS experiment is used instead (FELIX CRU). The Jefferson Lab test stand is now assembled and consists of five ALICE FECs (800 channels) connected to a single FELIX CRU.
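The channel count of the test stand follows directly from these numbers, and an indicative raw data rate can be sketched as below (the 10-bit, 10 Msps SAMPA operating point assumed here is illustrative, not a figure from the talk):

# Channel count of the JLab SAMPA test stand and an indicative raw rate.
# Assumption (illustrative only): 10-bit samples at 10 Msps per channel.
sampas_per_fec = 5
channels_per_sampa = 32
fecs = 5

channels = sampas_per_fec * channels_per_sampa * fecs       # 800 channels
raw_gbit_s = channels * 10e6 * 10 / 1e9                     # ~80 Gbit/s before suppression
print(channels, f"{raw_gbit_s:.0f} Gbit/s")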
At this point the Jefferson Lab group is learning how to configure and read out data from the SAMPA chips. Their goal is to connect the system to a prototype GEM detector at the lab. They will study the SAMPA’s response to detector signals and find out how to best utilize the chip’s DSP features. They will also learn how to deal with the continuous data stream produced.
William Gu from Jefferson Lab presented recent test results using a Xilinx KCU1500 accelerator card in conjunction with a Dell host computer.
Further work in the next few months (or years): first, combine the above-mentioned work (1) and (3) and stream the data to a computer for streaming readout software development in the ASTRA lab; second, perform performance tests (distributed clock jitter, synchronization precision, etc.) using existing hardware; third, budget permitting, produce one or several new streaming TDC boards to further test higher data throughput, clock distribution, and synchronization.
Phaneendra Bikkina from Alphacore presented the company history and specifications for several custom ASICs for nuclear physics experiments. Several of these ASIC projects are at the DOE STTR Phase II level, with parts to be delivered in January 2019 for initial testing. The chips will be qualified for 12-bit and 10-bit performance at sample speeds of 100 Msps and 50 Msps, respectively. They are fabricated in a 180 nm CMOS process with radiation hardening against SEL, SEU, and SEFI. The Alphacore chips include charge-sensitive preamplifiers, high-speed ADCs, and ADCs with JESD204B high-speed readout.
New developments include an eight-channel, 10-bit continuous ADC with sampling speeds up to 10 Gsps in a 28 nm CMOS process. The timing resolution is impressive at <1 ps, with relatively low power for an eight-channel device.
Jin Huang from BNL presented a very comprehensive review of the sPHENIX design study with a view toward using the same electronics model for an EIC interaction region. The sPHENIX detector includes seven large detector systems ranging from the inner tracker to the outer HCal. These detectors use different technologies and different front-end boards, all of which can be read out using the FELIX2 PCIe DAQ board to stream data from the detectors. The front-end interface chips are the SAMPA for the TPC and pixel sensors for the MVTX. This offers a very flexible but common readout interface that is independent of the front-end technology.
Jin presented an sPHENIX-EIC DAQ strategy based on the PCIe-based FPGA card [FELIX2], with fully streamed data as the main concept for acquiring all the front-end data. The FELIX2 board has the capability to manage the front-end data and is built on a commercial [PCIe] computing standard.
Recent test-stand results were presented for readout of a GEM with zigzag pads using the SAMPA front end and the FELIX2 DAQ. Special SAMPA chips with an 80 ns shaping time have been requested, and testing will continue as part of an EIC R&D project.
The summary points are as follows:
The presentations offered a wide view of the different hardware projects, which are either extensions of previous work [JLab 12 GeV] or developments for new experiments that have streaming readout as their main focus [sPHENIX].
We had a contribution from a commercial ASIC company, Alphacore. This growing company is working directly with our physics community to develop modern high-speed digitizer chips for front-end systems, including high-speed serial readout options.
For an EIC detector, the streaming readout format is clearly needed, and our efforts now continue to focus on the details and definitions. For example, holding data in front-end electronics memory until a 'trigger' signal is received is a concept that should be understood as falling outside a true streaming readout definition. Data suppression for high-channel-count detectors close to the vertex region may indeed need different controls to begin streaming, but holding data and waiting for a 'trigger' is not desired. The data rates from each of the EIC detector sub-systems have not been completely defined or simulated, but these studies continue, and the hardware specifications presented at this meeting show that EIC data rates are within our capabilities.
Distribution of timing synchronization and precision clock signals is of course required, but the best method of distribution and the required resolution and stability of these signals are not yet clear. Higher-sampling-rate ADCs will demand extremely low jitter [sub-ps], so close consideration of timing/synchronization distribution systems will be essential.
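The sub-ps requirement can be motivated with the standard aperture-jitter limit on ADC signal-to-noise ratio, SNR ≈ -20 log10(2π f_in t_j). The numbers below are a generic textbook-style estimate (the input frequencies and jitter values are our own illustrative choices, not figures from the presentations):

import math

def snr_jitter_db(f_in_hz, jitter_s):
    """Aperture-jitter-limited SNR (dB) for a full-scale sine at f_in_hz."""
    return -20 * math.log10(2 * math.pi * f_in_hz * jitter_s)

for tj in (1e-12, 0.2e-12):          # 1 ps vs 0.2 ps rms clock jitter
    for f in (125e6, 500e6):         # example input frequencies
        print(f"f_in={f/1e6:.0f} MHz, jitter={tj*1e12:.1f} ps -> "
              f"SNR limit {snr_jitter_db(f, tj):.1f} dB")
# e.g. 1 ps of jitter already limits a 500 MHz input to ~50 dB SNR.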
There was discussion about adding definitions for the essential timing and synchronization signaling to the draft EIC User Handbook. We have plenty of information for the DAQ section and hope to add it soon for review.
The first talk, a combination of talks on "Intermediate network" and "MPEG program stream", was given by Dr. Bernauer. Any streaming network can be logically represented by a data-flow network connecting data "sources" (detectors) to "filters", which receive one or more data streams, process them, and produce one or more new streams, and to data "sinks", which receive data for display or storage. The talk took "stream merging" as a common example of a filter and discussed different merging approaches.
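A minimal sketch of such a merging filter is given below (a hypothetical illustration, not Dr. Bernauer's implementation): several time-ordered input streams of timestamped records are merged into one time-ordered output stream, i.e. a filter with N inputs and one output in the data-flow picture.

import heapq

def merge_streams(*streams):
    """Merge several time-ordered streams of (timestamp, payload) records
    into a single time-ordered output stream."""
    yield from heapq.merge(*streams, key=lambda rec: rec[0])

# Example: two detector 'sources' producing timestamped hits
a = [(1, "det0 hit"), (5, "det0 hit"), (9, "det0 hit")]
b = [(2, "det1 hit"), (3, "det1 hit"), (8, "det1 hit")]
print(list(merge_streams(a, b)))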
In the second part, the OSI network layers were discussed, with example solutions for each layer. The presentation layer was identified as the layer for which a common protocol has to be defined to allow interoperability. Lower layers can be implementation dependent, but the talk recommended Ethernet and TCP/IP as a good default.
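As one concrete, purely illustrative possibility for the lower layers, records could be framed with a length prefix and pushed over a plain TCP socket; the framing below is a toy example, not a proposed standard for the presentation layer.

import struct

def _recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket (TCP is a byte stream, so a
    single recv() may return fewer bytes than requested)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def send_record(sock, payload: bytes):
    """Frame one record with a 4-byte big-endian length prefix and send it."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_record(sock) -> bytes:
    """Read one length-prefixed record from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)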
The remainder of the talk discussed the MPEG standard. While not directly usable as a presentation layer for a streaming readout, some of its design decisions are applicable and help ensure efficient implementations in software and hardware.
The discussion focused mainly on the question of congestion, i.e., how to handle a situation in which the data transport network is too slow and data has to be discarded. No clear solution was found, and this point warrants further investigation.
In the second talk, Dr. Blyth discussed ProIO, an event-based I/O stream format based on ProtoBuf. ProIO is a possible contender for a presentation layer. The use of ProtoBuf makes interfacing with many languages easy, as accessor functions and data structures can be generated automatically.
He compared speed and data size with ROOT; ProIO is on par with or exceeds the speed of ROOT in the selected benchmarks.
The discussion covered mainly the features and the (few) constraints of ProIO.
The question of the presentation layer is important, and the required feature set is not yet clear. Markus Diefenthaler and Jan Bernauer will develop a prototype to gain experience and further insight into the requirements.