The real-time systems of HEP experiments are presently highly distributed, possibly on heterogeneous CPUs. In many applications there is an important need to make information available to a large number of other processes in a transparent way. For this purpose "RPC-like" systems are not suitable, since most of them rely on polling from the client and on one-to-one connections. DIM is a very powerful alternative to those systems. It provides a named space in which processes can publish information (Publishers) and a very simple API for processes willing to use this information (Subscribers). It fully handles error recovery at the Publisher and Subscriber level, without additional software in the application. DIM is available on a large variety of platforms and operating systems, with C and C++ bindings. It was developed in the DELPHI experiment, is maintained at CERN, and is presently used in several HEP experiments. We shall present its capabilities and examples of its use in HEP experiments, in domains ranging from simple data publishing to event transfer, process control, and the communication layer for an Experiment Control Package (SMI++). We shall also present prospects for using it as the communication layer for future experiments' control systems.
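The named-service model described above can be sketched in a few lines. This is a hypothetical in-process illustration of the publish/subscribe pattern DIM implements, not DIM's actual C/C++ API: a Publisher registers a value under a service name, and Subscribers receive callbacks whenever the value changes, so no client-side polling is needed.

```python
# Minimal sketch of a named-service publish/subscribe registry
# (illustrative classes; not the real DIM API).

class ServiceRegistry:
    """Name space mapping service names to values and subscriber callbacks."""
    def __init__(self):
        self._values = {}
        self._subscribers = {}

    def publish(self, name, value):
        """Publisher side: set the current value and push it to subscribers."""
        self._values[name] = value
        for callback in self._subscribers.get(name, []):
            callback(value)

    def subscribe(self, name, callback):
        """Subscriber side: register a callback; deliver the current value at once."""
        self._subscribers.setdefault(name, []).append(callback)
        if name in self._values:
            callback(self._values[name])

registry = ServiceRegistry()
received = []
registry.publish("DELPHI/RUN_NUMBER", 41234)
registry.subscribe("DELPHI/RUN_NUMBER", received.append)  # gets current value
registry.publish("DELPHI/RUN_NUMBER", 41235)              # pushed on update
print(received)  # [41234, 41235]
```

The push-on-update delivery is the key contrast with the polling "RPC-like" systems the abstract mentions.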
We obtain sharp weighted estimates for solutions of the equation ∂̄u = f in a lineally convex domain of finite type. Precisely, we obtain estimates in the spaces L^p(Ω, δ_Ω^γ), δ_Ω being the distance to the boundary, with two different types of hypothesis on the form f: first, if the data f belongs to L^p(Ω, δ_Ω^γ), γ > -1, we have a mixed gain on the index p and the exponent γ; secondly, we obtain a similar estimate when the data f satisfies an appropriate anisotropic L^p estimate with weight δ_Ω^{γ+1}. Moreover, we extend those results to γ = -1 and obtain L^p(∂Ω) and BMO(∂Ω) estimates. These results allow us to extend the L^p(Ω, δ_Ω^γ)-regularity results for the weighted Bergman projection obtained in Charpentier et al. (Complex Var Elliptic Equ 59(8):1070–1095, 2014) for convex domains to more general weights.
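For reference, the weighted space in question carries the standard norm (δ_Ω denotes the distance to the boundary):

```latex
\|f\|_{L^p(\Omega,\delta_\Omega^\gamma)}
  = \left( \int_\Omega |f(z)|^p \,\delta_\Omega(z)^\gamma \, dV(z) \right)^{1/p},
\qquad \delta_\Omega(z) = \operatorname{dist}(z,\partial\Omega),
```

and the condition γ > -1 is exactly what makes the weight δ_Ω^γ integrable near the boundary.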
DIRAC: a community grid solution. Tsaregorodtsev, A; Bargiotti, M; Brook, N; ...
Journal of Physics: Conference Series,
07/2008, Volume 119, Issue 6
Journal Article
Peer-reviewed
Open access
The DIRAC system was developed in order to provide a complete solution for using the distributed computing resources of the LHCb experiment at CERN for data production and analysis. It allows concurrent use of over 10K CPUs and 10M file replicas distributed over many tens of sites. The sites can be part of a computing grid such as WLCG, or standalone computing clusters, all integrated in a single management structure. DIRAC is a generic system, with the LHCb-specific functionality incorporated through a number of plug-in modules, and it can be easily adapted to the needs of other communities. Special attention is paid to the resilience of the DIRAC components, to allow an efficient use of unreliable resources. The DIRAC production management components provide a framework for building highly automated data production systems, including data distribution and data-driven workload scheduling. In this paper we give an overview of the DIRAC system architecture and design choices. We show how the different components are put together to compose an integrated data processing system covering all the aspects of the LHCb experiment, from the MC production and raw data reconstruction to the final user analysis.
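The "generic core plus plug-in modules" design mentioned above can be illustrated with a small registry. This is a hypothetical sketch of the pattern, with invented names, not DIRAC's real code: the core delegates a decision (here, site selection) to whichever policy module a community registers.

```python
# Illustrative plug-in registry: community-specific behaviour is supplied
# by registered classes, while the core stays generic (hypothetical names,
# not DIRAC's actual API).

PLUGINS = {}

def register_plugin(name):
    """Class decorator adding a policy class to the global registry."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

class JobScheduler:
    """Generic core: delegates the site choice to a pluggable policy."""
    def __init__(self, policy_name):
        self.policy = PLUGINS[policy_name]()

    def schedule(self, job, sites):
        return self.policy.choose_site(job, sites)

@register_plugin("ByFreeSlots")
class ByFreeSlots:
    """Example policy: pick the site with the most free CPU slots."""
    def choose_site(self, job, sites):
        return max(sites, key=lambda s: s["free_slots"])["name"]

sites = [{"name": "CERN", "free_slots": 120}, {"name": "CNAF", "free_slots": 450}]
print(JobScheduler("ByFreeSlots").schedule({"id": 1}, sites))  # CNAF
```

Adapting the system to another community then amounts to registering a different policy class, without touching the core scheduler.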
In this paper, we give precise isotropic and non-isotropic estimates for the Bergman and Szegö projections of a bounded pseudoconvex domain whose boundary points are all of finite type and with locally diagonalizable Levi form. Additional local results on estimates of invariant metrics are also given.
DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all the tasks, starting with raw data transportation from the experiment area to the grid storage and data processing, up to the final user analysis. The reengineered DIRAC3 version of the system includes a fully grid-security-compliant framework for building service-oriented distributed systems; a complete Pilot Job framework for creating efficient workload management systems; and several subsystems to manage high-level operations such as data production and distribution management. The user interfaces of the DIRAC3 system, providing rich command-line and scripting tools, are complemented by a full-featured Web portal giving users secure access to all the details of the system status and ongoing activities. We will present an overview of the DIRAC3 architecture, its new innovative features, and the achieved performance. Extending DIRAC3 to manage computing resources beyond the WLCG grid will be discussed. Experience with the use of DIRAC3 by user communities other than LHCb, and in application domains other than High Energy Physics, will be shown to demonstrate the general-purpose nature of the system.
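The Pilot Job idea named above is a late-binding pattern: a pilot process starts on a worker node, probes the local environment, and only then pulls a payload it can actually run from a central task queue. The sketch below illustrates that pattern under invented names; it is not DIRAC3's real interface.

```python
# Illustrative late-binding pilot-job pattern (hypothetical names,
# not DIRAC3's actual workload management API).

from collections import deque

class TaskQueue:
    """Central queue of payloads, each declaring the capabilities it needs."""
    def __init__(self):
        self._queue = deque()

    def submit(self, payload):
        self._queue.append(payload)

    def match(self, capabilities):
        # Hand out the first payload the worker node can actually run.
        for _ in range(len(self._queue)):
            payload = self._queue.popleft()
            if payload["needs"] <= capabilities:
                return payload
            self._queue.append(payload)  # requeue unmatched payload
        return None

queue = TaskQueue()
queue.submit({"job": "reco-1", "needs": {"cvmfs", "x86_64"}})
queue.submit({"job": "sim-7", "needs": {"x86_64"}})

def pilot(capabilities):
    """Pilot: after probing the node, requests work matching what it found."""
    return queue.match(capabilities)

# A node without cvmfs skips reco-1 and gets sim-7 instead.
print(pilot({"x86_64"})["job"])  # sim-7
```

Because the payload is chosen only after the pilot has verified the resource, broken or misconfigured nodes waste a pilot rather than a real job, which is what makes the use of unreliable resources efficient.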
In this paper, we give a precise description of the complex geometry of a pseudo-convex domain in C^n near a boundary point of finite type where the Levi form is locally diagonalizable, and we use it to obtain sharp size estimates for the Bergman kernel and its derivatives. When all points of the boundary are of that type, we deduce from those estimates the L^p regularity of the Bergman projection.
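Here the L^p regularity of the Bergman projection P (the orthogonal projection of L²(Ω) onto its subspace of holomorphic functions) means, in the standard sense, that P extends to a bounded operator on L^p:

```latex
\|P f\|_{L^p(\Omega)} \le C_p \, \|f\|_{L^p(\Omega)}, \qquad f \in L^p(\Omega).
```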
(French abstract.) We give a precise description of the complex geometry of a pseudo-convex domain in C^n in the neighbourhood of a boundary point of finite type where the Levi form is locally diagonalizable. We use this description to establish fine estimates of the Bergman kernel and of its derivatives. From these estimates we deduce the L^p regularity of the Bergman projection when all boundary points have the property considered.
In the LHCb experiment, a wide variety of Monte Carlo simulated samples needs to be produced for the experiment's physics program. Monte Carlo productions are handled centrally, similarly to all massive processing of data in the experiment. In order to cope with the large set of different types of simulation samples, the necessary procedures, based on common infrastructure, have been set up, with a numerical event type identification code used throughout. The various elements in the procedure will be described: from writing a configuration for an event type to deploying it in the production environment, and from submitting and processing a request to retrieving the sample produced, as well as the conventions established to allow their interplay. The choices made have allowed a high level of automation of Monte Carlo productions, which are handled centrally in a transparent way, with experts concentrating on their specific tasks. As a result, the massive Monte Carlo production of the experiment is efficiently processed on a world-wide distributed system with minimal manpower.
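The role of the numerical event type identification code can be sketched as a registry keyed by that code, which every stage of the procedure consults. The codes, decay names, and configuration files below are invented for illustration; they are not LHCb's real numbering scheme.

```python
# Hypothetical sketch of an event-type registry (codes and configs are
# illustrative, not LHCb's actual scheme): one numeric code keys every
# step from configuration lookup to the production request.

EVENT_TYPES = {
    10000001: {"process": "minimum bias", "generator_config": "minimum_bias.py"},
    20000002: {"process": "inclusive b-bbar", "generator_config": "bbbar_inclusive.py"},
}

def build_production_request(event_type, n_events):
    """Assemble a production request from the event-type registry."""
    if event_type not in EVENT_TYPES:
        raise KeyError(f"unknown event type {event_type}")
    cfg = EVENT_TYPES[event_type]
    return {
        "event_type": event_type,
        "config": cfg["generator_config"],
        "n_events": n_events,
    }

request = build_production_request(20000002, 1_000_000)
print(request["config"])  # bbbar_inclusive.py
```

Using one code throughout is what lets configuration writing, request submission, and sample retrieval interoperate without manual bookkeeping.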
The LHCb Data Management System. Baud, J P; Charpentier, Ph; Ciba, K; ...
Journal of Physics: Conference Series,
01/2012, Volume 396, Issue 3
Journal Article
Peer-reviewed
Open access
The LHCb Data Management System is based on the DIRAC Grid Community Solution. LHCbDirac provides extensions to the basic DMS, such as a Bookkeeping System. Datasets are defined as sets of files corresponding to a given query in the Bookkeeping System. Datasets can be manipulated by CLI tools as well as by automatic transformations (removal, replication, processing). Dataset replication is handled dynamically, based on disk space usage at the sites and on dataset popularity. For custodial storage, an on-demand recall of files from tape is performed, driven by the requests of the jobs, including disk cache handling. We shall describe the tools that are available for Data Management, from the handling of large datasets to basic tools for users, as well as for monitoring the dynamic behavior of the LHCb storage capacity.
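A popularity-driven replication policy of the kind described above can be sketched as a simple decision rule. The thresholds and field names here are illustrative assumptions, not LHCbDirac's real algorithm: frequently accessed datasets gain disk replicas, cold ones shrink back toward their minimum.

```python
# Sketch of a dynamic replication decision based on dataset popularity
# (thresholds and field names are hypothetical, not LHCbDirac's actual
# policy).

def plan_replication(dataset, min_replicas=1, max_replicas=4,
                     hot_accesses=100, cold_accesses=5):
    """Return a replica-count adjustment for one dataset."""
    accesses = dataset["accesses_last_quarter"]
    replicas = dataset["disk_replicas"]
    if accesses >= hot_accesses and replicas < max_replicas:
        return "add-replica"       # popular dataset: spread it out
    if accesses <= cold_accesses and replicas > min_replicas:
        return "remove-replica"    # cold dataset: reclaim disk space
    return "keep"

print(plan_replication({"accesses_last_quarter": 250, "disk_replicas": 2}))  # add-replica
print(plan_replication({"accesses_last_quarter": 1, "disk_replicas": 3}))    # remove-replica
```

In a real system such a rule would also weigh per-site free disk space, as the abstract notes; that dimension is omitted here for brevity.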