Peer reviewed Open access
  • Experience building and ope...
    Albert, M; Bakken, J; Bonacorsi, D; Brew, C; Charlot, C; Huang, Chih-Hao; Colling, D; Dumitrescu, C; Fagan, D; Fassi, F; Fisk, I; Flix, J; Giacchetti, L; Gomez-Ceballos, G; Gowdy, S; Grandi, C; Gutsche, O; Hahn, K; Holzman, B; Jackson, J; Kreuzer, P; Kuo, C M; Mason, D; Pukhaeva, N; Qin, G; Quast, G; Rossman, P; Sartirana, A; Scheurer, A; Schott, G; Shih, J; Tader, P; Thompson, R; Tiradani, A; Trunov, A

    Journal of Physics: Conference Series, 04/2010, Volume: 219, Issue: 7
    Journal Article

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres, located at large universities and national laboratories, for a second custodial copy of the CMS RAW data and a primary copy of the simulated data, for data-serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event-selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while exporting data to a global mesh of Tier-2s at rates comparable to the raw data export rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing this large distributed resource represents a challenge. In this article we discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data-serving requests, and submitting batch processing requests.