Efficiently utilizing the rapidly increasing concurrency of multi-petaflop computing systems is a significant programming challenge. One approach is to structure applications with an upper layer of many loosely coupled coarse-grained tasks, each comprising a tightly coupled parallel function or program. Many-task programming models such as functional parallel dataflow may be used at the upper layer to generate massive numbers of tasks, each of which generates significant tightly coupled parallelism at the lower level through multithreading, message passing, and/or partitioned global address spaces. At large scales, however, the management of task distribution, data dependencies, and intertask data movement is a significant performance challenge. In this work, we describe Turbine, a new highly scalable and distributed many-task dataflow engine. Turbine executes a generalized many-task intermediate representation with automated self-distribution and is scalable to multi-petaflop infrastructures. We present here the architecture of Turbine and its performance on highly concurrent systems.
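The two-layer pattern this abstract describes can be illustrated with a minimal sketch in plain Python. This is not Turbine's actual API; the task names (`simulate`, `analyze`) are hypothetical, and `concurrent.futures` stands in for the upper-layer dataflow engine, with each task representing a coarse-grained unit whose internals would be a tightly coupled parallel program:

```python
# Conceptual sketch of many-task dataflow (NOT Turbine's API): an upper
# layer generates many loosely coupled tasks; a downstream task runs only
# once its data dependencies (the upstream futures) have resolved.
from concurrent.futures import ThreadPoolExecutor

def simulate(i):
    # Stand-in for a coarse-grained task; at the lower level this would be
    # a tightly coupled parallel function or program.
    return i * i

def analyze(results):
    # Downstream task: consumes the outputs of all upstream tasks.
    return sum(results)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Upper layer: generate many independent coarse-grained tasks.
    futures = [pool.submit(simulate, i) for i in range(8)]
    # Dataflow dependency: analyze fires after all simulate tasks complete.
    total = analyze([f.result() for f in futures])

print(total)  # sum of squares 0..7 -> 140
```

At scale, the engine's job is precisely what the abstract highlights: distributing these tasks, tracking their dependencies, and moving intermediate data between them without a central bottleneck.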
Using MPI Gropp, William; Lusk, Ewing; Skjellum, Anthony
1999
eBook
The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features. Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations. Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations.
Using MPI Gropp, William; Lusk, Ewing; Skjellum, Anthony
2014-11-07
eBook
This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
Quasielastic neutrino scattering is an important aspect of the experimental program to study fundamental neutrino properties including neutrino masses, mixing angles, the mass hierarchy, and the CP-violating phase. Proper interpretation of the experiments requires reliable theoretical calculations of neutrino-nucleus scattering. In this paper we present calculations of response functions and cross sections for neutral-current scattering of neutrinos off $^{12}$C. These calculations are based on realistic treatments of nuclear interactions and currents, the latter including the axial, vector, and vector-axial interference terms crucial for determining the difference between neutrino and antineutrino scattering and the CP-violating phase. We find that the strength and energy dependence of two-nucleon processes induced by correlation effects and interaction currents are crucial in providing the most accurate description of neutrino-nucleus scattering in the quasielastic regime.
This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones. Using Advanced MPI covers major changes in MPI-3, including changes to remote memory access and one-sided communication that simplify semantics and enable better performance on modern hardware; new features such as nonblocking and neighborhood collectives for greater scalability on large systems; and minor updates to parallel I/O and dynamic processes. It also covers support for hybrid shared-memory/message-passing programming; MPI_Message, which aids in certain types of multithreaded programming; features that handle very large data; an interface that allows the programmer and the developer to access performance data; and a new binding of MPI to Fortran.
Message Passing Interface (MPI) is a protocol that enables parallel computation on distributed, heterogeneous, loosely coupled computer systems. The book begins with a brief overview of parallel development environments and introduces the fundamental concepts. It then shows how the performance of a program can be tested using graphical analysis tools. The basic capabilities of MPI are discussed by way of the Poisson problem, and the book shows how MPI can be used to implement virtual topologies. The N-body problem is used to illustrate the more advanced message-passing features of MPI. After a comparison of MPI implementations with other systems, the book is rounded out by language specifications for the C, C++, and Fortran versions of all MPI routines.
In recent years local chiral interactions have been derived and implemented in quantum Monte Carlo methods in order to test to what extent the chiral effective field theory framework impacts our knowledge of few- and many-body systems. In this Letter, we present Green's function Monte Carlo calculations of light nuclei based on the family of local two-body interactions presented by our group in a previous paper, in conjunction with chiral three-body interactions fitted to bound- and scattering-state observables in the three-nucleon sector. These interactions include Δ intermediate states in their two-pion-exchange components. We obtain predictions for the energy levels and level ordering of nuclei in the mass range A=4-12, accurate to ≤2% of the binding energy, in very satisfactory agreement with experimental data.
Beowulf clusters, which exploit mass-market PC hardware and software in conjunction with cost-effective commercial network technology, are becoming the platform for many scientific, engineering, and commercial applications. With growing popularity has come growing complexity. Addressing that complexity, Beowulf Cluster Computing with Linux and Beowulf Cluster Computing with Windows provide system users and administrators with the tools they need to run the most advanced Beowulf clusters. The book is appearing in both Linux and Windows versions in order to reach the entire PC cluster community, which is divided into two distinct camps according to the node operating system. Each book consists of three stand-alone parts. The first provides an introduction to the underlying hardware technology, assembly, and configuration. The second part offers a detailed presentation of the major parallel programming libraries. The third, and largest, part describes software infrastructures and tools for managing cluster resources. This includes some of the most popular of the software packages available for distributed task scheduling, as well as tools for monitoring and administering system resources and user accounts. Approximately 75% of the material in the two books is shared, with the other 25% pertaining to the specific operating system. Most of the chapters include text specific to the operating system. The Linux volume includes a discussion of parallel file systems.