The angiotensin II type 1 receptor (AT1R) is an emerging target of functional non‐HLA antibodies (Ab). We examined the potential of determining the degree of presensitization against AT1R as a risk factor for graft survival and acute rejection (AR). The study included 599 kidney recipients transplanted between 1998 and 2007. Serum samples were analyzed in a blinded fashion for anti‐AT1R antibodies (AT1R‐Abs) using a quantitative solid‐phase assay. A threshold for AT1R‐Ab levels was statistically determined at 10 U based on the time to graft failure. An extended Cox model determined risk factors for the occurrence of graft failure and a first AR episode. AT1R‐Abs >10 U were detected in 283 patients (47.2%) before transplantation. Patients with AT1R‐Ab levels >10 U had a 2.6‐fold higher risk of graft failure from 3 years posttransplantation onwards (p = 0.0005) and a 1.9‐fold higher risk of experiencing an AR episode within the first 4 months after transplantation (p = 0.0393). Antibody‐mediated rejection (AMR) accounted for one third of AR episodes, and 71.4% of these were associated with >10 U of pretransplant AT1R‐Abs. Pretransplant anti‐AT1R‐Abs are thus an independent risk factor for long‐term graft loss, in association with a higher risk of early AR episodes.
The authors find that a high serum level of anti‐angiotensin II type 1 receptor antibodies in kidney transplant recipients before transplantation is an independent risk factor for long‐term graft loss in association with a higher risk of early acute rejection episodes. See related paper by Taniguchi et al (page 2577) and editorial by Tinckam and Campbell (page 2515).
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from the LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object hierarchies or from unfavorable I/O performance. The PODIO project was created to address these problems. PODIO is based on the idea of employing plain-old-data (POD) structures wherever possible, while avoiding deep object hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface for the physicist developer, such as support for inter-object relations and automatic memory management, as well as a Python interface. To simplify the creation of efficient data models, PODIO employs code generation from a simple YAML-based markup language. In addition, it was developed with concurrency in mind in order to support modern CPU features, for example by giving basic support for vectorization techniques.
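To make the layered design concrete, here is a minimal, hypothetical C++ sketch of the idea (illustrative only, not PODIO's actual generated code; the Hit/HitCollection names are invented): a trivially copyable POD layer stored contiguously inside a collection, fronted by cheap value-type handles, with no virtual inheritance anywhere.

```cpp
#include <cstddef>
#include <vector>

// Plain-old-data layer: trivially copyable, no virtual functions,
// suitable for contiguous storage, fast I/O and vectorized access.
struct HitData {
  double x, y, z;   // position
  double energy;    // deposited energy
};

class HitCollection;

// Handle layer: user-facing objects are thin views into the POD buffer
// owned by the collection, so copying a Hit is cheap and the object
// lifetime is managed by the collection, not by the user.
class Hit {
public:
  Hit(const HitCollection* coll, std::size_t idx) : m_coll(coll), m_idx(idx) {}
  double energy() const;
private:
  const HitCollection* m_coll;
  std::size_t m_idx;
};

class HitCollection {
public:
  Hit create(double x, double y, double z, double e) {
    m_data.push_back({x, y, z, e});
    return Hit(this, m_data.size() - 1);
  }
  const HitData& data(std::size_t i) const { return m_data[i]; }
  std::size_t size() const { return m_data.size(); }
private:
  std::vector<HitData> m_data;  // contiguous POD storage
};

double Hit::energy() const { return m_coll->data(m_idx).energy; }

int main() {
  HitCollection hits;
  Hit h = hits.create(0.0, 0.0, 1.0, 42.0);
  return h.energy() > 0 ? 0 : 1;
}
```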
Gaudi Evolution for Future Challenges Clemencic, M; Hegner, B; Leggett, C
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 4
Journal Article
Peer reviewed
Open access
The LHCb software framework Gaudi was initially designed and developed almost twenty years ago, when computing was very different from today. It has also been used by a variety of other experiments, including ATLAS, Daya Bay, GLAST, HARP, LZ, and MINERVA. Although it has been actively developed throughout these years, stability and backward compatibility have been favoured, reducing the possibilities for adopting new techniques such as multithreaded processing. R&D efforts like GaudiHive have, however, shown its potential to cope with the new challenges. With the LHC's second Long Shutdown approaching, and to prepare for the computing challenges of the upgrades of the collider and the detectors, now is the perfect moment to review the design of Gaudi and to plan future developments of the project. To this end, LHCb, ATLAS and the Future Circular Collider community have joined efforts to bring Gaudi forward and prepare it for the upcoming needs of the experiments. We present here how Gaudi will evolve over the next years, together with the long-term development plans.
HEP experiments produce enormous data sets at an ever-growing rate. To cope with the challenge posed by these data sets, experiments' software needs to embrace all the capabilities modern CPUs offer. With a decreasing memory-to-core ratio, the one-process-per-core approach of recent years becomes less feasible. Instead, multi-threading with fine-grained parallelism needs to be exploited to benefit from memory sharing among threads. Gaudi is an experiment-independent data processing framework, used for instance by the ATLAS and LHCb experiments at CERN's Large Hadron Collider. It was originally designed with only sequential processing in mind. In a recent effort, the framework has been extended to allow for multi-threaded processing. This includes components for the concurrent scheduling of several algorithms (either processing the same or multiple events), for thread-safe data store access, and for resource management. In the sequential case, the relationships between algorithms are encoded implicitly in their pre-determined execution order. For parallel processing, these relationships need to be expressed explicitly in order for the scheduler to be able to exploit maximum parallelism while respecting the dependencies between algorithms. Therefore, means to express and automatically track these dependencies need to be provided by the framework. In this paper, we present components introduced to express and track the dependencies of algorithms and to deduce a precedence-constrained directed acyclic graph, which serves as the basis for our algorithmically sophisticated scheduling approach for tasks with dynamic priorities. We introduce an incremental migration path for existing experiments towards parallel processing and highlight the benefits of explicit dependencies even in the sequential case, such as sanity checks and sequence optimization by graph analysis.
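As a rough illustration of the core idea (a minimal, hypothetical sketch, not the actual Gaudi scheduler, which additionally handles dynamic priorities and intra- and inter-event concurrency): each algorithm declares its inputs and outputs, a DAG is derived from the producer/consumer relations, and a topological sort yields a valid execution order. All algorithm names in the example are invented; note that everything sitting in the ready queue at the same moment has no mutual dependencies and could run concurrently.

```cpp
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// Each algorithm declares its data dependencies explicitly.
struct Algorithm {
  std::string name;
  std::set<std::string> inputs;   // data objects read from the event store
  std::set<std::string> outputs;  // data objects written to the event store
};

// Build a precedence-constrained DAG: B depends on A if B reads a data
// object that A produces. Kahn's algorithm then yields a valid execution
// order and detects dependency cycles.
std::vector<std::string> schedule(const std::vector<Algorithm>& algs) {
  std::map<std::string, const Algorithm*> producer;
  for (const auto& a : algs)
    for (const auto& out : a.outputs) producer[out] = &a;

  std::map<std::string, std::vector<std::string>> edges;  // producer -> consumers
  std::map<std::string, int> indegree;
  for (const auto& a : algs) indegree[a.name];  // ensure every node exists
  for (const auto& a : algs)
    for (const auto& in : a.inputs)
      if (auto it = producer.find(in); it != producer.end()) {
        edges[it->second->name].push_back(a.name);
        ++indegree[a.name];
      }

  std::queue<std::string> ready;
  for (const auto& [name, deg] : indegree)
    if (deg == 0) ready.push(name);  // no unmet inputs: runnable now

  std::vector<std::string> order;
  while (!ready.empty()) {
    std::string n = ready.front();
    ready.pop();
    order.push_back(n);
    for (const auto& d : edges[n])       // releasing n may unblock consumers
      if (--indegree[d] == 0) ready.push(d);
  }
  if (order.size() != algs.size()) throw std::runtime_error("dependency cycle");
  return order;
}

int main() {
  std::vector<Algorithm> algs = {
      {"Tracking", {"Hits"}, {"Tracks"}},
      {"Digitization", {}, {"Hits"}},
      {"VertexFinding", {"Tracks"}, {"Vertices"}},
  };
  // Prints Digitization, Tracking, VertexFinding: the order implied
  // by the data flow, regardless of how the algorithms were listed.
  for (const auto& n : schedule(algs)) std::cout << n << "\n";
}
```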
Preparing HEP software for concurrency Clemencic, M; Hegner, B; Mato, P ...
Journal of Physics: Conference Series, 01/2014, Volume 513, Issue 5
Journal Article
Peer reviewed
Open access
The necessity of thread-safe experiment software has recently become very evident, largely driven by the evolution of CPU architectures towards exploiting increasing levels of parallelism. For high-energy physics this represents a real paradigm shift, as concurrent programming was previously limited to special, well-defined domains like control software or software-framework internals. This paradigm shift, however, falls into the middle of the successful LHC programme, and many millions of lines of code have already been written without parallel execution in mind. In this paper we take a closer look at the offline processing applications of the LHC experiments and their readiness for the many-core era. We review how previous design choices impact the move to concurrent programming. We present our findings on transforming parts of the LHC experiment reconstruction software into thread-safe code, and the main design patterns that have emerged during the process. A plethora of parallel-programming patterns are well known outside the HEP community, but only a few have turned out to be straightforward enough to be suitable for non-expert physics programmers. Finally, we propose a potential strategy for the migration of existing HEP experiment software to the many-core era.
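As one concrete example of such a simple pattern (a generic illustration, not code taken from the paper; FieldMap and its contents are invented): a lazily filled mutable cache inside a const method is a classic data race when sequential code is promoted to concurrent use, and guarding the one-time initialization with std::call_once fixes it without changing the interface.

```cpp
#include <cmath>
#include <mutex>
#include <vector>

// A common hazard found when auditing sequential HEP code: a cache
// filled lazily inside a const-looking accessor. Two threads entering
// value() concurrently would race on m_table. std::call_once guarantees
// that the initialization runs exactly once, even under concurrent
// access, and is simple enough for non-expert programmers to apply.
class FieldMap {
public:
  double value(std::size_t i) const {
    std::call_once(m_initFlag, [this] { fill(); });  // thread-safe lazy init
    return m_table[i];
  }

private:
  void fill() const {
    m_table.resize(1024);
    for (std::size_t i = 0; i < m_table.size(); ++i)
      m_table[i] = std::sin(0.01 * static_cast<double>(i));  // placeholder
  }
  mutable std::once_flag m_initFlag;
  mutable std::vector<double> m_table;
};

int main() {
  FieldMap map;
  return map.value(10) != 0.0 ? 0 : 1;  // safe from any number of threads
}
```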
Experience with the CMS event data model Elmer, P; Hegner, B; Sexton-Kennedy, L
Journal of Physics: Conference Series, 04/2010, Volume 219, Issue 3
Journal Article
Peer reviewed
Open access
The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings and attempt to evaluate how well the model is meeting its goals. We will discuss some of the new features that have been added since 2006, as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans from the trigger farm up to final physics analysis. Future plans, in particular those dealing with schema evolution and scaling, will be discussed briefly.
The CMS experiment is expected to start data taking during 2008, and large data samples, on the petabyte scale, will be produced each year. The CMS Physics Tools package provides the CMS physicist with a powerful and flexible software layer for the analysis of these huge datasets that is well integrated into the CMS experiment software. C++ generic programming is used to allow simple extensions of the analysis tools. A core part of this package is the Candidate Model, which provides a coherent interface to different types of data. Standard tasks such as combinatorial analyses, generic cuts, MC truth matching and constrained fitting are supported. Advanced template techniques enable the user to add missing features easily. We explain the underlying model and certain details of the implementation, and present some use cases showing how the tools are currently used in generator and full-simulation studies in preparation for the analysis of real data.
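A minimal sketch of this generic-programming style (hypothetical and much simplified; the real Candidate Model is considerably richer, and the Muon/PtMinSelector names are invented): a templated selector and a combinatorial loop that work for any candidate type exposing pt() and charge().

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Stand-in for a candidate type: any type providing pt() and charge()
// works with the generic tools below, which is the essence of the
// generic-programming approach described above.
struct Muon {
  double ptVal;
  int chargeVal;
  double pt() const { return ptVal; }
  int charge() const { return chargeVal; }
};

// Generic cut: compiles for any candidate type with a pt() accessor.
template <typename Candidate>
struct PtMinSelector {
  double ptMin;
  bool operator()(const Candidate& c) const { return c.pt() > ptMin; }
};

// Generic combinatorial loop: all opposite-sign pairs whose legs pass
// the selector, e.g. dimuon combinations for a resonance search.
template <typename Candidate, typename Selector>
std::vector<std::pair<Candidate, Candidate>>
combine(const std::vector<Candidate>& cands, const Selector& sel) {
  std::vector<std::pair<Candidate, Candidate>> pairs;
  for (std::size_t i = 0; i < cands.size(); ++i)
    for (std::size_t j = i + 1; j < cands.size(); ++j)
      if (sel(cands[i]) && sel(cands[j]) &&
          cands[i].charge() + cands[j].charge() == 0)
        pairs.emplace_back(cands[i], cands[j]);
  return pairs;
}

int main() {
  std::vector<Muon> muons = {{25.0, +1}, {18.0, -1}, {4.0, -1}};
  auto pairs = combine(muons, PtMinSelector<Muon>{10.0});
  std::cout << pairs.size() << " opposite-sign pair(s)\n";  // prints 1
}
```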
The Generator Services (GENSER) provide ready-to-use Monte Carlo generators, compiled on multiple platforms or ready to be compiled, for the LHC experiments. In this paper we discuss the recent developments in the build machinery, which have allowed the installation process to be fully automated. The new system is based on, and fully integrated with, the "LCG external software" infrastructure, which provides all the external packages needed by the LHC experiments.
The CMS (Compact Muon Solenoid) experiment is one of the two large general-purpose particle physics detectors at the LHC (Large Hadron Collider). An international collaboration of nearly 3500 people operates this complex detector, whose main goal is to answer the most fundamental questions about our universe. The size and globally diversified nature of the collaboration, together with the petabytes of data collected each year, make bringing users up to speed to contribute to physics analysis a challenging task. The CMS User Support performs this task by helping users quickly learn about CMS computing and the physics analysis tools they need. In this presentation we give an overview of its goals, its organization, and its use of collaborative tools to maintain the software and computing documentation and to conduct year-round tutorials on the physics tools needed as a prerequisite for analysis. We also discuss user feedback evaluating its work.