New types of specialized network applications are being created that need to be able to transmit large amounts of data across dedicated network links. TCP fails to be a suitable method of bulk data transfer in many of these applications, giving rise to new classes of protocols designed to circumvent TCP's shortcomings. It is typical in these high-performance applications, however, that the system hardware is simply incapable of saturating the bandwidths supported by the network infrastructure. When the bottleneck for data transfer occurs in the system itself and not in the network, it is critical that the protocol scales gracefully to prevent buffer overflow and packet loss. It is therefore necessary to build a high-speed protocol adaptive to the performance of each system by including a dynamic performance-based flow control. This paper develops such a protocol, performance adaptive UDP (henceforth PA-UDP), which aims to dynamically and autonomously maximize performance under different systems. A mathematical model and related algorithms are proposed to describe the theoretical basis behind effective buffer and CPU management. A novel delay-based rate-throttling model is also demonstrated to be very accurate under diverse system latencies. Based on these models, we implemented a prototype under Linux, and the experimental results demonstrate that PA-UDP outperforms other existing high-speed protocols on commodity hardware in terms of throughput, packet loss, and CPU utilization. PA-UDP is efficient not only for high-speed research networks, but also for reliable high-performance bulk data transfer over dedicated local area networks where congestion and fairness are typically not a concern.
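The delay-based rate throttling described above can be illustrated with a minimal sketch. The function below is a hypothetical policy, not PA-UDP's actual algorithm: it derives an inter-packet send delay from a target rate and backs the rate off as the receiver's buffer approaches a high watermark, the general idea behind performance-based flow control. The names `throttled_delay` and `high_watermark` are assumptions for illustration.

```python
def throttled_delay(packet_size_bytes, target_rate_bps, buffer_fill, high_watermark=0.8):
    """Return the inter-packet send delay in seconds for a target rate.

    If the receiver's buffer fill fraction exceeds the high watermark,
    the send rate is scaled down linearly toward zero as the buffer
    approaches full, to avoid overflow and packet loss.  (Hypothetical
    policy for illustration; PA-UDP derives its delay from a model of
    measured receiver performance.)
    """
    if not 0.0 <= buffer_fill <= 1.0:
        raise ValueError("buffer_fill must be in [0, 1]")
    rate = target_rate_bps
    if buffer_fill > high_watermark:
        # Back off linearly: at buffer_fill == 1.0 the rate reaches zero.
        rate *= (1.0 - buffer_fill) / (1.0 - high_watermark)
    if rate <= 0:
        return float("inf")  # buffer full: stop sending entirely
    return packet_size_bytes * 8 / rate
```

For a 1500-byte packet at 1 Gb/s with an empty buffer this yields a 12 µs spacing; at 90% buffer fill the rate is halved and the spacing doubles.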
Rotation is ubiquitous at each step of stellar evolution, from star formation to the final stages, and it affects the course of evolution, the timescales and nucleosynthesis. Stellar rotation is also an essential prerequisite for the occurrence of Gamma-Ray Bursts. In this book the author thoroughly examines the basic mechanical and thermal effects of rotation, their influence on mass loss by stellar winds, the effects of differential rotation and its associated instabilities, the relation with magnetic fields and the evolution of the internal and surface rotation. Further, he discusses the numerous observational signatures of rotational effects obtained from spectroscopy and interferometric observations, as well as from chemical abundance determinations, helioseismology and asteroseismology, etc. On an introductory level, this book presents in a didactical way the basic concepts of stellar structure and evolution in "track 1" chapters. The other more specialized chapters form an advanced course on the graduate level and will further serve as a valuable reference work for professional astrophysicists.
In Software-as-a-Service, multiple tenants are typically consolidated into the same database instance to reduce costs. For analytics-as-a-service, in-memory column databases are especially suitable because they offer very short response times. This paper studies the automation of operational tasks in multi-tenant in-memory column database clusters. As a prerequisite, we develop a model for predicting whether the assignment of a particular tenant to a server in the cluster will lead to violations of response time goals. This model is then extended to capture drops in capacity incurred by migrating tenants between servers. We present an algorithm for moving tenants around the cluster to ensure that response time goals are met. In so doing, the number of servers in the cluster may be dynamically increased or decreased. The model is also extended to manage multiple copies of a tenant's data for scalability and availability. We validated the model with an implementation of a multi-tenant clustering framework for SAP's in-memory column database TREX.
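The tenant-placement problem described above can be sketched as a greedy first-fit bin packing, with servers added on demand when no existing server has headroom. This is an illustrative stand-in, not the paper's model-driven algorithm: the function name `place_tenants` and the scalar per-server `capacity` (abstracting the response-time model's admission decision) are assumptions.

```python
def place_tenants(tenants, capacity):
    """Greedy first-fit-decreasing assignment of tenants to servers.

    `tenants` maps tenant id -> predicted load; `capacity` is the load one
    server can absorb before response-time goals would be violated.  New
    servers are opened on demand, mimicking dynamic cluster growth.
    (Illustrative sketch; the paper drives placement from a learned
    response-time model and also handles migrations and replicas.)
    """
    servers = []  # each server: {"load": float, "tenants": [ids]}
    for tid, load in sorted(tenants.items(), key=lambda kv: -kv[1]):
        for srv in servers:
            if srv["load"] + load <= capacity:
                srv["load"] += load
                srv["tenants"].append(tid)
                break
        else:
            # No existing server fits this tenant: grow the cluster.
            servers.append({"load": load, "tenants": [tid]})
    return servers
```

Sorting tenants by decreasing load before placement is the classic first-fit-decreasing refinement; shrinking the cluster would correspond to migrating tenants off underloaded servers, which the paper handles via its migration-cost extension.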
The increasingly large demand for data storage has spurred on the development of systems that rely on the aggregate performance of multiple hard drives. In many of these applications, reliability and availability are of utmost importance. It is therefore necessary to closely scrutinize a complex storage system's reliability characteristics. In this paper, we use Markov models to rigorously demonstrate the effects that failure prediction has on a system's mean time to data loss (MTTDL) given a parameterized sensitivity. We devise models for a single hard drive, RAID1, and N+1 type RAID systems. We find that the normal SMART failure prediction system has little impact on the MTTDL, but striking results can be seen when the sensitivity of the predictor reaches 0.5 or more. In past research, machine learning techniques have been proposed to improve SMART, showing that sensitivity levels of 0.5 or more are possible by training on past SMART data alone. The results of our stochastic models show that even with such relatively modest predictive power, these failure prediction algorithms can drastically extend the MTTDL of a data storage system. We feel that these results underscore the importance and need for complex prediction systems when calculating impending hard drive failures.
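The Markov-model approach can be illustrated with the textbook three-state chain for a two-disk mirror (both disks up → one failed → data loss), whose closed-form MTTDL is (3λ + μ)/(2λ²) with failure rate λ = 1/MTTF and repair rate μ = 1/MTTR. This is a minimal sketch of the baseline (no-predictor) case only; the paper's models additionally parameterize predictor sensitivity, which this function does not capture.

```python
def mttdl_raid1(mttf_hours, mttr_hours):
    """MTTDL of a two-disk mirror (RAID1) from the standard three-state
    continuous-time Markov chain:

        state 2 --(2*lam)--> state 1 --(lam)--> data loss
        state 1 --(mu)--> state 2   (repair)

    Solving the expected-absorption-time equations gives
    MTTDL = (3*lam + mu) / (2*lam**2).  (Baseline model without failure
    prediction; a predictor would effectively raise the repair rate or
    lower the rate of unanticipated failures.)
    """
    lam = 1.0 / mttf_hours   # per-disk failure rate
    mu = 1.0 / mttr_hours    # repair rate
    return (3 * lam + mu) / (2 * lam ** 2)
```

With a 1,000,000-hour MTTF and a 24-hour repair time, the mirror's MTTDL comes out above 2 × 10¹⁰ hours, dominated by the μ/(2λ²) term; faster (or predicted, hence preemptive) repair scales the result almost linearly.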
In this paper, we present a novel coding scheme that can tolerate up to two-disk failures, satisfying the RAID-6 property. Our coding scheme, Code-M, is a non-MDS (Maximum Distance Separable, i.e., tolerating the maximum number of failures for a given amount of redundancy) code that is optimized by trading rate for fast recovery times. Code-M is lowest density and its parity chain length is fixed at 2C - 1 for a given number of columns in a strip-set C. The rate of Code-M, or percentage of disk space occupied by non-parity data, is (C - 1)/C. We perform theoretical analysis and evaluation of the coding scheme under different configurations. Our theoretical analysis shows that Code-M has favorable reconstruction times compared to RDP, another well-established RAID-6 code. The quantitative comparisons of Code-M against RDP demonstrate recovery performance improvement by a factor of up to 5.18 under single disk failure and 2.8 under double failures using the same number of disks. Overall, Code-M is a RAID-6 type code supporting fast recovery with reduced I/O complexity.
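The two quantities quoted above are simple functions of the strip-set width C, and can be checked with a few lines of arithmetic. The helper name `code_m_params` is an assumption; the formulas are taken directly from the abstract.

```python
def code_m_params(C):
    """Parity-chain length and rate for Code-M with C columns per
    strip-set, using the formulas quoted in the abstract:
    chain length 2C - 1, rate (C - 1)/C."""
    if C < 2:
        raise ValueError("a strip-set needs at least two columns")
    return {"parity_chain_length": 2 * C - 1, "rate": (C - 1) / C}
```

For example, C = 4 gives a parity-chain length of 7 and a rate of 0.75, i.e., a quarter of the disk space goes to parity; the rate approaches 1 as C grows, which is the rate-versus-recovery-time trade-off the abstract describes.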
The CPLEAR electromagnetic calorimeter
Adler, R.; Backenstoss, G.; Bal, F. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 05/1997, Volume 390, Issue 3
Journal article · Peer-reviewed · Open access
A large-acceptance lead/gas sampling electromagnetic calorimeter (ECAL) was constructed for the CPLEAR experiment to detect photons from decays of π⁰s with momentum p(π⁰) ≤ 800 MeV/c. The main purpose of the ECAL is to determine the decay vertex of neutral-kaon decays K⁰ → π⁰π⁰ → 4γ and K⁰ → π⁰π⁰π⁰ → 6γ. This requires a position-sensitive photon detector with high spatial granularity in the r-, φ-, and z-coordinates. The ECAL, a barrel without end-caps located inside a magnetic field of 0.44 T, consists of 18 identical concentric layers. Each layer of 1/3 radiation length (X₀) contains a converter plate followed by small cross-section high-gain tubes of 2640 mm active length which are sandwiched by passive pick-up strip plates. The ECAL, with a total of 6 X₀, has an energy resolution of σ(E)/E ≈ 13%/√(E (GeV)) and a position resolution of 4.5 mm for the shower foot. The shower topology allows separation of electrons from pions. The design, construction, read-out electronics, and performance of the detector are described.
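The quoted sampling-dominated resolution can be evaluated at any photon energy with a one-line helper; the function name is an assumption, and only the 13%/√E stochastic term from the abstract is modeled (no constant or noise term).

```python
import math

def ecal_sigma_over_e(e_gev, stochastic_term=0.13):
    """Relative energy resolution sigma(E)/E ~ 13% / sqrt(E [GeV]),
    the sampling-dominated form quoted for the CPLEAR ECAL.  Constant
    and noise terms are neglected in this sketch."""
    if e_gev <= 0:
        raise ValueError("energy must be positive")
    return stochastic_term / math.sqrt(e_gev)
```

At 1 GeV the relative resolution is 13%; at 0.25 GeV, typical of photons from the π⁰ decays above, it degrades to 26%, which is why the vertex determination leans on the 4.5 mm shower-foot position resolution as well.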