TriggerDAQIntro

From Mu2eWiki
Revision as of 15:51, 17 November 2017 by Rlc (talk | contribs)

Introduction

The Mu2e Trigger and Data Acquisition (DAQ) subsystem provides the components necessary for collecting digitized data from the Tracker, Calorimeter, Cosmic Ray Veto, and Beam Monitoring systems, and for delivering that data to online and offline processing for analysis and storage. It is also responsible for detector synchronization, control, monitoring, and operator interfaces.

Requirements

The Mu2e Collaboration has developed a set of requirements for the Trigger and Data Acquisition System [1]. The DAQ must monitor, select, and validate physics and calibration data from the Mu2e detector for final stewardship by the offline computing systems. The DAQ must combine information from ~500 detector data sources and apply filters to reduce the average data volume by a factor of at least 100 before it can be transferred to offline storage.

The DAQ must also provide a timing and control network for precise synchronization and control of the data sources and readout, along with a detector control system (DCS) for operational control and monitoring of all Mu2e subsystems. DAQ requirements are based on the attributes listed below.

  • Beam Structure
    The beam timing is shown in Figure 1. Beam is delivered to the detector during the first 492 msec of each Supercycle. During this period there are eight 54 msec spills, and each spill contains approximately 32,000 “micro-bunches”, for a total of 256,000 micro-bunches in a 1.33 second Supercycle. A micro-bunch period is 1695 ns. Readout Controllers store data from the digitizers during the “live gate”. The live gate width is programmable, but is nominally the last 1000 ns of each micro-bunch period.
  • Data Rate
    The detector will generate an estimated 120 KBytes of zero-suppressed data per micro-bunch, for an average data rate of ~70 GBytes/sec when beam is present. To reduce DAQ bandwidth requirements, this data is buffered in Readout Controller (ROC) memory during the spill period and transmitted to the DAQ over the full Supercycle.
  • Detectors
    The DAQ system receives data from the subdetectors listed below.
    • Calorimeter – 1860 crystals in 2 disks. There are 240 Readout Controllers located inside the cryostat. Each crystal is connected to two avalanche photodiodes (APDs). The readout produces approximately 25 ADC values (12 bits each) per hit.
    • Cosmic Ray Veto system – 10,304 scintillating fibers connected to 18,944 Silicon Photomultipliers (SiPMs). There are 296 front-end boards (64 channels each), and 15 Readout Controllers. The readout generates approximately 12 bytes for each hit. CRV data is used in the offline reconstruction, so readout is only necessary for timestamps that have passed the tracker and calorimeter filters. The average rate depends on threshold settings.
    • Extinction and Target Monitors – these monitors will be implemented as standalone systems with local processing. Summary information will be forwarded to the DAQ for inclusion in the run conditions database and, optionally, in the event stream.
    • Tracker – 23,040 straw tubes, with 96 tubes per “panel”, 12 panels per “station”, and 20 stations total. There are 240 Readout Controllers (one for each panel) located inside the cryostat. Straw tubes are read out from both ends to determine the hit location along the wire. The readout produces two TDC values (16 bits each) and typically six ADC values (10 bits each) per hit. The ADC values are the analog sum from both ends of the straw.
  • Processing
    The DAQ system provides online processing to perform calorimeter and tracker filters. The goal of these filters is to reduce the data rate by a factor of at least 100, limiting the offline data storage to less than 7 PetaByte/year. Based on preliminary estimates, the online processing requirement is approximately 30 TeraFLOPS.
  • Environment
    The DAQ system will be located in the surface-level electronics room in the Mu2e Detector Hall and connected to the detector by optical fiber. There are no radiation or temperature issues. The DAQ will, however, be exposed to a magnetic fringe field from the detector solenoid at a level of ~20-30 Gauss.
Figure 1. Mu2e Beam Structure.
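
The beam-structure and rate figures above are mutually consistent, which a short back-of-envelope script can confirm. This is an illustrative sketch built only from the numbers quoted in the text (it is not Mu2e software, and the seconds-per-year constant is an assumption):

```python
# Back-of-envelope check of the DAQ rate figures quoted above.
SPILLS_PER_SUPERCYCLE = 8
MICROBUNCHES_PER_SPILL = 32_000
SPILL_S = 0.054                 # each spill lasts 54 msec
SUPERCYCLE_S = 1.33             # full Supercycle length
DATA_PER_MICROBUNCH_B = 120e3   # ~120 KBytes zero-suppressed per micro-bunch
SECONDS_PER_YEAR = 3.156e7      # assumed; detector uptime is ignored

microbunches = SPILLS_PER_SUPERCYCLE * MICROBUNCHES_PER_SPILL
print(microbunches)  # 256000 micro-bunches per Supercycle, as stated

data_per_supercycle = microbunches * DATA_PER_MICROBUNCH_B  # ~30.7 GB

# Instantaneous rate while spills are being delivered:
rate_spill = data_per_supercycle / (SPILLS_PER_SUPERCYCLE * SPILL_S)

# ROC buffering spreads the readout over the full Supercycle:
rate_avg = data_per_supercycle / SUPERCYCLE_S

# A factor-100 reduction keeps offline storage near the 7 PB/year target:
offline_pb_per_year = rate_avg / 100 * SECONDS_PER_YEAR / 1e15

print(f"{rate_spill / 1e9:.0f} GB/s during spills")  # ~71 GB/s, the quoted ~70
print(f"{rate_avg / 1e9:.0f} GB/s sustained")        # ~23 GB/s off-detector
print(f"{offline_pb_per_year:.1f} PB/year offline")  # ~7 PB/year
```

The calculation shows why the ROC buffering matters: spreading readout over the 1.33 s Supercycle instead of the 432 ms of spills reduces the required off-detector bandwidth by roughly a factor of three.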


Technical Design

The Mu2e DAQ is based on a “streaming” readout. This means that all detector data is digitized, zero-suppressed in front-end electronics, and then transmitted off the detector to the DAQ system. While this approach results in a higher off-detector data rate, it also provides greater flexibility in data analysis and filtering, as well as a simplified architecture.
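
As a rough illustration of the zero-suppression step in the front-end electronics, the sketch below keeps only digitized samples above a pedestal-plus-threshold cut, retaining their sample indices so the pulse can still be located in time. The pedestal, threshold, and data layout here are invented for illustration; this page does not describe the actual front-end firmware logic:

```python
# Illustrative zero-suppression: keep only (index, value) pairs for ADC
# samples above pedestal + threshold. Pedestal/threshold values are assumed.

def zero_suppress(samples, pedestal=100, threshold=8):
    """Return (sample_index, adc_value) pairs exceeding pedestal + threshold."""
    cut = pedestal + threshold
    return [(i, s) for i, s in enumerate(samples) if s > cut]

waveform = [101, 99, 100, 140, 180, 130, 102, 100]  # one digitized channel
hits = zero_suppress(waveform)
print(hits)  # [(3, 140), (4, 180), (5, 130)] -- only the pulse survives
```

Suppressing the flat pedestal samples in the front end is what makes the quoted ~120 KBytes per micro-bunch, and hence a fully streaming readout, feasible.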

The Mu2e DAQ architecture is further simplified by the integration of all off-detector components in a “DAQ Server” that functions as a centralized controller, data collector and data processor. A single DAQ Server can be used as a complete standalone data acquisition/processing system or multiple DAQ Servers can be connected together to form a highly scalable system.

To reduce development costs, the system design is based almost entirely on commercial hardware and, wherever possible, software from previous DAQ development and open source efforts.

Figure 2. Mu2e DAQ Architecture

[1] Tschirhart, R., “Trigger and DAQ Requirements,” Mu2e-doc-1150