File-based data flow in the...
    Andre, J-M; Andronidis, A; Bawej, T; Behrens, U; Branson, J; Chaze, O; Cittolin, S; Darlea, G-L; Deldicque, C; Dobson, M; Dupont, A; Erhan, S; Gigi, D; Glege, F; Gomez-Ceballos, G; Hegeman, J; Holzner, A; Jimenez-Estupiñán, R; Masetti, L; Meijers, F; Meschi, E; Mommsen, R K; Morovic, S; Nunez-Barranco-Fernandez, C; O'Dell, V; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Racz, A; Roberts, P; Sakulin, H; Schwick, C; Stieger, B; Sumorok, K; Veverka, J; Zaza, S; Zejdl, P

    Journal of Physics: Conference Series, 12/2015, Volume 664, Issue 8
    Journal Article, Conference Proceeding

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, by either services in the flow of the HLT execution (for rates, etc.) or watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
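
To make the bookkeeping scheme described in the abstract concrete, the sketch below illustrates the general idea of small per-process JSON "documents" (e.g. event rates per luminosity section) that are later merged by another part of the system. It is only a minimal illustration: the function names, field names, file layout and the .jsn suffix are assumptions for this example, not the actual CMS DAQ schema or code.

```python
# Illustrative sketch of JSON-document bookkeeping (assumed names/schema, not CMS code).
import glob
import json
import os


def write_rate_document(run_dir, process_id, lumi_section, events_in, events_out):
    """Write one small per-process bookkeeping document for a luminosity section."""
    doc = {
        "source": f"hlt-process-{process_id}",  # hypothetical process identifier
        "lumiSection": lumi_section,
        "eventsInput": events_in,
        "eventsAccepted": events_out,
    }
    path = os.path.join(run_dir, f"rates_ls{lumi_section:04d}_p{process_id}.jsn")
    with open(path, "w") as f:
        json.dump(doc, f)
    return path


def aggregate_rate_documents(run_dir, lumi_section):
    """Merge all per-process documents for a lumi section into one summary,
    as another part of the system might do when aggregating output data."""
    total = {"lumiSection": lumi_section, "eventsInput": 0, "eventsAccepted": 0}
    pattern = os.path.join(run_dir, f"rates_ls{lumi_section:04d}_p*.jsn")
    for path in glob.glob(pattern):
        with open(path) as f:
            doc = json.load(f)
        total["eventsInput"] += doc["eventsInput"]
        total["eventsAccepted"] += doc["eventsAccepted"]
    return total
```

In this toy version each HLT-side writer produces its own small file and the aggregator only reads and sums them, which mirrors the decoupling the paper attributes to the file-based approach; whether such documents stay memory-resident or are flushed to disk is, per the abstract, decided by whether another part of the system needs them.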