The Isotope and Muon Production using Advanced-Cyclotron and Target Technology Project (IMPACT) foresees the introduction of two new target stations and three new beamlines: one for radionuclide production and two for surface muon production. The latter two form the High-Intensity Muon Beams (HIMB) subproject, which plans to increase the muon rate from the current world record of 10⁸ µ⁺/s up to 10¹⁰ µ⁺/s. This work presents an overview of the future HIMB beamlines, focusing on the magnet designs that have been developed to ensure increased muon production and transmission. Radiation-hard resistive coils based on mineral insulation are required here due to the proximity to the target station. High muon capture and transmission efficiency requires solenoid-like magnets, as well as dipole magnets and crossed-field separators that select the desired particles while suppressing unwanted background particles. The radiation-hard capture solenoid plays the most important role in the whole beamline, since it must provide a high capture efficiency. Beam optics studies provided the on-axis field profile needed to optimize the size and shape of the capture solenoid. The article therefore also elucidates the solenoid design strategies for achieving the desired capture efficiency.
• Fast and robust search algorithm for selecting the set of variables that contribute most to a fault.
• Quasi-optimal result compared to exhaustive search on synthetic data.
• Experimental confirmation of the superiority of multivariate over univariate contribution analysis.
• Highly correlated results for the Tennessee Eastman chemical process simulator.
• Framework usable for non-linear, multivariate contribution analysis.
This paper presents a multivariate linear contribution analysis in the context of fault detection, isolation, and diagnosis. The usual univariate contribution analysis for fault isolation is improved through feature selection. The fault index and the individual contributions of the variables are calculated with Probabilistic Principal Component Analysis. A new, more efficient method is proposed to select the most decisive variables that contribute to the fault. Experiments are conducted on illustrative synthetic benchmarks and the Tennessee Eastman chemical plant simulator. Among the multivariate selection searches, the Sequential Backward and Forward search shows the best balance between the quality of the selected set of contributing variables and the computational burden, compared to exhaustive and Branch & Bound searches.
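To make the idea of a fault index with per-variable contributions concrete, the following is a minimal sketch using plain PCA as a stand-in for the paper's Probabilistic PCA: the squared prediction error (SPE) serves as the fault index, and its additive decomposition over variables gives the contributions. The data, the number of components, and the injected fault are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 200 samples of 5 correlated variables.
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
mu, sd = X.mean(axis=0), X.std(axis=0)

# PCA model retaining k principal components (stand-in for the paper's PPCA).
k = 2
_, _, Vt = np.linalg.svd((X - mu) / sd, full_matrices=False)
P = Vt[:k].T                          # loading matrix, shape (5, k)

def fault_index(x):
    """SPE fault index and its per-variable contributions."""
    xs = (x - mu) / sd
    resid = xs - P @ (P.T @ xs)       # part of x the model cannot explain
    return resid @ resid, resid**2    # SPE and its additive decomposition

# Inject a large bias on variable 3 of an otherwise normal sample.
x_fault = X[0].copy()
x_fault[3] += 20 * sd[3]

spe_normal, _ = fault_index(X[0])
spe_fault, contrib = fault_index(x_fault)
assert spe_fault > spe_normal         # the fault raises the index
print(int(np.argmax(contrib)))        # variable flagged as most contributing
```

A multivariate selection search such as the paper's Sequential Backward and Forward search would then search over subsets of variables rather than ranking these univariate contributions individually.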
We give analytical definitions of the Chernoff, Bhattacharyya, and Jeffreys–Matusita probabilistic distances between two Dirichlet distributions, and between two Beta distributions as a special case. We show that all other known probabilistic distances are unsuitable for closed-form analytical treatment in this setting. We discuss parameter learning of the Dirichlet distribution from a finite sample set and present an application to split-and-merge image segmentation.
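For the Bhattacharyya case, the closed form follows directly from the Dirichlet normalizing constant: for Dir(α) and Dir(β), the Bhattacharyya coefficient is BC = B((α+β)/2) / √(B(α)B(β)), where B(·) is the multivariate Beta function, and the distance is −ln BC. A short sketch (function names are our own, not from the paper):

```python
from math import lgamma

def log_mbeta(a):
    """log B(a) with B(a) = prod_i Gamma(a_i) / Gamma(sum_i a_i)."""
    return sum(lgamma(x) for x in a) - lgamma(sum(a))

def bhattacharyya_dirichlet(alpha, beta):
    """Bhattacharyya distance between Dir(alpha) and Dir(beta):
    D = -ln BC with BC = B((alpha+beta)/2) / sqrt(B(alpha) * B(beta))."""
    mid = [(a + b) / 2.0 for a, b in zip(alpha, beta)]
    log_bc = log_mbeta(mid) - 0.5 * (log_mbeta(alpha) + log_mbeta(beta))
    return -log_bc

# Beta distributions are the two-parameter special case of the Dirichlet.
print(bhattacharyya_dirichlet([2.0, 5.0], [2.0, 5.0]))   # identical distributions
print(bhattacharyya_dirichlet([1.0, 1.0], [10.0, 2.0]))  # distinct distributions
```

By the Cauchy–Schwarz inequality, BC ≤ 1 with equality exactly for identical distributions, so the distance is zero in the first call and strictly positive in the second.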
The construction of efficient parallel programs usually requires expert knowledge in the application area and a deep insight into the architecture of a specific parallel machine. Often, the resulting performance is not portable, i.e., a program that is efficient on one machine is not necessarily efficient on another machine with a different architecture. Transformation systems provide a more flexible solution: they start with a specification of the application problem and allow the generation of efficient programs for different parallel machines. The programmer has to give an exact specification of the algorithm expressing its inherent degree of parallelism and is relieved of the low-level details of the architecture. We propose such a transformation system with an emphasis on exploiting data parallelism combined with a hierarchically organized structure of task parallelism. Starting with a specification of the maximum degree of task and data parallelism, the transformations generate a specification of a parallel program for a specific parallel machine. The transformations are based on a cost model and are applied in a predefined order, fixing the most important design decisions such as the scheduling of independent multitask activations, data distributions, pipelining of tasks, and the assignment of processors to task activations. We demonstrate the usefulness of the approach with examples from scientific computing.
This book offers broad coverage of all aspects of parallel programming. Many examples and exercises are provided to show how to apply the techniques. There is special emphasis on runtime efficiency and memory organization.
The MUon Scattering Experiment, MUSE, at the Paul Scherrer Institute, Switzerland, investigates the proton charge radius puzzle, lepton universality, and two-photon exchange via simultaneous measurements of elastic muon-proton and electron-proton scattering. The experiment uses the PiM1 secondary beam channel, which was designed for high precision pion scattering measurements. We review the properties of the beam line established for pions. We discuss the production processes that generate the electron and muon beams, and the simulations of these processes. Simulations of the π/μ/e beams through the channel using TURTLE and G4beamline are compared. The G4beamline simulation is then compared to several experimental measurements of the channel, including the momentum dispersion at the intermediate focal plane and target, the shape of the beam spot at the target, and timing measurements that allow the beam momenta to be determined. Finally, we conclude that the PiM1 channel can be used for high precision π, μ, and e scattering.
Task pools are data structures for the dynamic distribution of work to processors. This paper compares several realizations of task pools resulting from different internal organizations, such as shared or distributed organizations as well as combinations of them. The effect of different memory managers is also considered. The paper gives a detailed comparison of the resulting performance for task pools implemented in C with POSIX threads for selected irregular applications on current multiprocessor machines.
We consider the task-based execution of parallel irregular applications, which are characterized by an unpredictable computational structure induced by the input data. The dynamic load balancing required to execute such applications efficiently can be provided by task pools. Thus, the performance of a task-based irregular application is tightly coupled to the scalability and overhead of the task pool used to execute it. To reduce this overhead, this article considers the use of the hardware-specific synchronization operations compare & swap and load & reserve / store conditional. We present several realizations of task pools using these operations. Runtime experiments on two shared-memory machines, a SunFire 6800 and an IBM p690, show that the new implementations obtain significantly higher performance than implementations relying on the POSIX thread library for synchronization.
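The task-pool pattern the two abstracts above describe can be sketched as follows. This is a minimal shared (central) pool in which worker threads repeatedly remove a task, execute it, and may insert newly created child tasks, which is the source of irregularity. A lock stands in for the hardware compare & swap and load & reserve / store conditional primitives the paper evaluates; the pool organization and termination logic are the point of the sketch, and all names are our own.

```python
import threading
from collections import deque

class TaskPool:
    """Minimal central task pool for dynamically created tasks."""
    def __init__(self):
        self._tasks = deque()
        self._lock = threading.Lock()      # stand-in for CAS / LL-SC
        self._outstanding = 0              # tasks inserted but not yet finished

    def put(self, task):
        with self._lock:
            self._tasks.append(task)
            self._outstanding += 1

    def _get(self):
        with self._lock:
            return self._tasks.popleft() if self._tasks else None

    def work(self):
        while True:
            task = self._get()
            if task is None:
                with self._lock:
                    if self._outstanding == 0:
                        return             # no task anywhere -> terminate
                continue                   # pool momentarily empty, retry
            task(self)                     # a task may put() child tasks
            with self._lock:
                self._outstanding -= 1

results, res_lock = [], threading.Lock()

def count_down(n):
    """Irregular task: records n and spawns a child task for n - 1."""
    def task(pool):
        with res_lock:
            results.append(n)
        if n > 0:
            pool.put(count_down(n - 1))
    return task

pool = TaskPool()
pool.put(count_down(5))
threads = [threading.Thread(target=pool.work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))  # [0, 1, 2, 3, 4, 5]
```

A lock-free variant would replace the locked deque operations with an atomic head pointer updated via compare & swap, which is exactly the overhead reduction the paper measures.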
Energy-Aware Execution of Fork-Join-Based Task Parallelism. Rauber, T.; Rünger, G.
2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, Aug. 2012.
Conference Proceeding.
In this article, we use an analytical energy model based on frequency scaling to model the energy consumption of tasks in a fork-join pattern of parallelism. In particular, tasks that may be executed concurrently with each other are considered, and the resulting energy consumption for different processor assignments is investigated. Frequency scaling factors that lead to minimum energy consumption are derived and used in task-based scheduling algorithms. An experimental evaluation provides simulations for a large number of randomly generated task sets as well as energy measurements on an Intel Sandy Bridge architecture using a complex application from numerical analysis.
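To illustrate why an energy-minimal scaling factor exists at all, the following sketch uses a standard textbook frequency-scaling model, not necessarily the paper's exact formulation: a task of w cycles at frequency f takes t = w/f seconds and draws power P(f) = P_static + c_dyn·f³, so E(f) = P_static·w/f + c_dyn·w·f², and setting dE/df = 0 gives f_opt = (P_static / (2·c_dyn))^(1/3). All constants are assumed values for illustration.

```python
# Assumed hardware constants (illustrative, not measured values).
P_static = 20.0     # W, static power
c_dyn = 1.5e-27     # W/Hz^3, dynamic-power coefficient
w = 1e9             # cycles in the task

def energy(f):
    """E(f) = static energy + dynamic energy for a task of w cycles."""
    return P_static * w / f + c_dyn * w * f**2

# Closed-form minimizer from dE/df = -P_static*w/f^2 + 2*c_dyn*w*f = 0.
f_opt = (P_static / (2.0 * c_dyn)) ** (1.0 / 3.0)

# Sanity check: the closed form beats a coarse scan of scaling factors.
freqs = [f_opt * s for s in (0.5, 0.8, 1.0, 1.25, 2.0)]
assert min(freqs, key=energy) == f_opt
print(f"f_opt = {f_opt / 1e9:.2f} GHz, E(f_opt) = {energy(f_opt):.2f} J")
```

At the optimum the dynamic energy is exactly half the static energy, which is the kind of analytical relationship a scheduler can exploit when assigning scaling factors to concurrent tasks.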