HiggsSignals is a Fortran90 computer code for testing the compatibility of Higgs sector predictions with Higgs rates and masses measured at the LHC or the Tevatron. Arbitrary models with any number of Higgs bosons can be investigated using a model-independent input scheme based on HiggsBounds. The test is based on the calculation of a χ² measure from the predicted and measured Higgs rates and masses, fully taking into account systematics and correlations in the signal rate predictions, luminosity, and Higgs mass predictions. It features two complementary test methods. First, the peak-centered method, in which each observable is defined by a Higgs signal rate measured at a specific hypothetical Higgs mass, corresponding to a tentative Higgs signal. Second, the mass-centered method, in which the test is evaluated by comparing the signal rate measurement to the theory prediction at the Higgs mass predicted by the model. The program allows for the simultaneous use of both methods, which is useful in testing models with multiple Higgs bosons. The code automatically combines the signal rates of multiple Higgs bosons if their signals cannot be resolved by the experimental analysis. We compare results obtained with HiggsSignals to official ATLAS and CMS results for various examples of Higgs property determinations and find very good agreement. A few examples of HiggsSignals applications are provided, going beyond the scenarios investigated by the LHC collaborations. For models with more than one Higgs boson we recommend using HiggsSignals and HiggsBounds in parallel to exploit the full constraining power of Higgs search exclusion limits and the measurements of the signal seen at mH ≈ 125.5 GeV.
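The χ² measure described above, for correlated measurements, is the standard quadratic form of the residuals with the inverse covariance matrix. A minimal generic sketch (not HiggsSignals code; the numbers below are made up for illustration):

```python
import numpy as np

def chi_squared(predicted, measured, covariance):
    """Generic chi-squared for correlated measurements:
    (r_pred - r_obs)^T C^{-1} (r_pred - r_obs)."""
    residual = np.asarray(predicted, dtype=float) - np.asarray(measured, dtype=float)
    # Solve C x = residual instead of explicitly inverting C (better conditioned).
    return float(residual @ np.linalg.solve(covariance, residual))

# Two hypothetical signal-rate measurements with a correlated
# systematic uncertainty encoded in the off-diagonal covariance entries.
predicted = [1.0, 1.1]
measured = [0.9, 1.3]
cov = np.array([[0.04, 0.008],
                [0.008, 0.04]])
print(round(chi_squared(predicted, measured, cov), 3))
```

With fully uncorrelated uncertainties the covariance is diagonal and this reduces to the familiar sum of squared pulls.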
GenASiSBasics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision (Version 2 of Basics) makes mostly minor additions to functionality and includes some simplifying name changes.
Program Title: SineWaveAdvection, SawtoothWaveAdvection, and RiemannProblem (fluid dynamics example problems illustrating GenASiSBasics); ArgonEquilibrium and ClusterFormation (molecular dynamics example problems illustrating GenASiSBasics)
Program Files doi: http://dx.doi.org/10.17632/6w9ygpygmc.1
Licensing provisions: GPLv3
Programming language: Fortran 2003 (tested with gfortran 6.1.0, Intel Fortran 16.0.3, Cray Compiler 8.5.3)
Journal reference of previous version: Computer Physics Communications, 196 (2015) 506
Does the new version supersede the previous version?: Yes
Reasons for the new version: This version makes mostly minor additions to functionality and includes some simplifying name changes.
Summary of revisions: Most additions to functionality are minor. Two new singleton objects are KIND_SMALL and KIND_TINY, for smaller-sized numbers than those specified by the previously available KIND_DEFAULT. The class MeasuredValueForm can now handle some more complicated cases of unit string processing. The numerical values in the CONSTANT singleton have been updated to 2016 values [3], and CONSTANT and UNIT contain a few additional members.
A new class TimerForm can be used to track the wall time occupied by various segments of code. The PROGRAM_HEADER singleton now contains an array member of this new class. With calls like those shown (display omitted) the user can initialize their own timers; on return, iMyTimer contains the index of the newly initialized timer. The corresponding start and stop calls (displays omitted) should surround the block of code to be timed. The information displayed by calling the ShowStatistics method of PROGRAM_HEADER includes data from all initialized timers, including one for overall execution time, which is present by default.
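The pattern described here (a registry of named wall-time timers, with an integer handle returned on initialization and start/stop calls surrounding the timed block) can be sketched generically. This is an illustrative Python analogue, not the GenASiS Fortran API; all names below are invented:

```python
import time

class TimerRegistry:
    """Registry of named wall-time timers with integer handles,
    mimicking the usage pattern described in the text."""
    def __init__(self):
        self.names, self.starts, self.totals = [], [], []

    def initialize(self, name):
        """Return the handle (index) of the named timer, creating it if new."""
        if name not in self.names:
            self.names.append(name)
            self.starts.append(None)
            self.totals.append(0.0)
        return self.names.index(name)

    def start(self, handle):
        self.starts[handle] = time.perf_counter()

    def stop(self, handle):
        self.totals[handle] += time.perf_counter() - self.starts[handle]

    def show_statistics(self):
        """Accumulated wall time per timer name."""
        return dict(zip(self.names, self.totals))

registry = TimerRegistry()
h = registry.initialize("MainLoop")   # analogous to receiving iMyTimer
registry.start(h)
total = sum(range(100000))            # block of code being timed
registry.stop(h)
```

Accumulating into `totals` on each stop means a timer can bracket a block inside a loop and report the aggregate time, which matches the statistics-reporting behavior described above.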
The code now expects to be compiled with OpenMP, typically by applying compiler flags. Strictly speaking this is only required for the PROGRAM_HEADER singleton, which queries the number of threads via a library call. In GenASiSBasics, OpenMP directives (which appear as comments as far as the Fortran 2003 standard is concerned) are only used in the Clear and Copy commands.
There have been a number of name changes, mostly for simplification and consistency. These include the classes in ArrayArrays, where for example ArrayInteger_1D_Form is now simply Integer_1D_Form. Similar streamlining changes have been made to the MessagePassing classes: IncomingMessageArrayRealForm is now MessageIncoming_1D_R_Form, for instance. The class VariableGroupArrayMetadata is now VariableGroup_1D_Form. The name ParametersStreamForm has been changed by one character (deletion of an s) to ParameterStreamForm. The member Selected of VariableGroupForm has been changed to iaSelected, where ia is a conventional prefix we use for an array of array indices.
The interface and functionality of the SetGrid member of StructuredGridImageForm have been modified so as not to include boundary cells exterior to the computational domain, which prevented display of the computational domain in 3D plots with VisIt [4] unless a Box operator was applied. See the fluid dynamics examples for the modified usage.
Finally, version 4.10 of the Silo library [2] introduced an include file named silo_f9x.inc, which the FileSystem classes of GenASiSBasics now expect to be available instead of silo.inc.
Nature of problem: By way of illustrating GenASiSBasics functionality, solve example fluid dynamics and molecular dynamics problems.
Solution method: For fluid dynamics examples, finite-volume. For molecular dynamics examples, leapfrog and velocity-Verlet integration.
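The velocity-Verlet integration named above (used in the molecular dynamics examples) can be illustrated with a minimal generic sketch, here applied to a toy harmonic oscillator rather than to the GenASiS example problems:

```python
import math

def velocity_verlet(x, v, acceleration, dt, steps):
    """Velocity-Verlet integration: second-order accurate and symplectic.
    x, v: initial position and velocity; acceleration(x) returns a(x)."""
    a = acceleration(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = acceleration(x)              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update, averaged a
        a = a_new
    return x, v

# Toy problem: unit harmonic oscillator, a(x) = -x, exact period 2*pi.
x, v = velocity_verlet(1.0, 0.0, lambda y: -y,
                       dt=1e-3, steps=int(2 * math.pi / 1e-3))
```

After one full period the trajectory returns close to its initial state, reflecting the method's good long-term energy behavior compared with, e.g., forward Euler.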
External routines/libraries: MPI [1] and Silo [2]
Additional comments including Restrictions and Unusual features:
The example problems named above are not ends in themselves, but serve to illustrate our object-oriented approach and the functionality available through GenASiSBasics. In addition to these more substantial examples, we provide individual unit test programs for each of the classes that constitute GenASiSBasics.
GenASiSBasics is available in the CPC Program Library and also at https://github.com/GenASiS.
[1] http://www.mcs.anl.gov/mpi/
[2] https://wci.llnl.gov/simulation/computer-codes/silo
[3] C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40 (2016) 100001.
[4] https://wci.llnl.gov/simulation/computer-codes/visit
This paper reports the release of PathSum, a new software suite of state-of-the-art path integral methods for studying the dynamics of single or extended systems coupled to harmonic environments. The package includes two modules, suitable for system-bath problems and for extended systems comprising many coupled system-bath units, and is offered in C++ and Fortran implementations. The system-bath module offers the recently developed small matrix path integral (SMatPI) and the well-established iterative quasi-adiabatic propagator path integral (i-QuAPI) method for iteration of the reduced density matrix of the system. In the SMatPI module, the dynamics within the entanglement interval can be computed using QuAPI, the blip sum, time evolving matrix product operators, or the quantum-classical path integral method. These methods have distinct convergence characteristics, and their combination allows a user to access a variety of regimes. The extended system module provides the user with two algorithms of the modular path integral method, applicable to quantum spin chains or excitonic molecular aggregates. An overview of the methods and code structure is provided, along with guidance on method selection and representative examples.
The DDEC6 method is one of the most accurate and broadly applicable atomic population analysis methods. It works for a broad range of periodic and non-periodic materials with no magnetism, collinear magnetism, and non-collinear magnetism, irrespective of the basis set type. First, we show that DDEC6 charge partitioning to assign net atomic charges corresponds to solving a series of 14 Lagrangians in order. Then, we provide flow diagrams for overall DDEC6 analysis, spin partitioning, and bond order calculations. We wrote an OpenMP-parallelized Fortran code to provide efficient computations. We show that by storing large arrays as shared variables in cache-line-friendly order, memory requirements are independent of the number of parallel computing cores and false sharing is minimized. We show that both the total memory required and the computational time scale linearly with increasing numbers of atoms in the unit cell. Using the presently chosen uniform grids, computational times of ∼9 to 94 seconds per atom were required to perform DDEC6 analysis on a single computing core in an Intel Xeon E5 multi-processor unit. Parallelization efficiencies were usually >50% for computations performed on 2 to 16 cores of a cache-coherent node. As examples we study a B-DNA decamer, nickel metal, supercells of hexagonal ice crystals, six X@C60 endohedral fullerene complexes, a water dimer, a Mn12-acetate single molecule magnet exhibiting collinear magnetism, an Fe-O-N-C-H single molecule magnet exhibiting non-collinear magnetism, and several spin states of an ozone molecule. Efficient parallel computation was achieved for systems containing as few as one and as many as >8000 atoms in a unit cell. We varied many calculation factors (e.g., grid spacing, code design, thread arrangement) and report their effects on calculation speed and precision. We make recommendations for excellent performance.
Quantum electrodynamics and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that allows one, for the first time, to perform DGLAP evolution up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MSbar heavy quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD⊗QED equations are solved in x-space by means of higher order interpolation, followed by Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes Deep-Inelastic Scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O(αs²). All the functionalities of APFEL can be accessed via a Graphical User Interface, supplemented with a variety of plotting tools for PDFs, parton luminosities, and structure functions. Written in Fortran 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository.
Program title: APFEL
Catalogue identifier: AESQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESQ_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 163479
No. of bytes in distributed program, including test data, etc.: 2164619
Distribution format: tar.gz
Programming language: Fortran 77, C/C++ and Python.
Computer: All.
Operating system: All.
RAM: ≤ 2 MB
Classification: 11.6.
External routines: LHAPDF
Nature of problem:
Solution of the unpolarized coupled DGLAP evolution equations up to NNLO in QCD and to LO in QED in the variable-flavor-number scheme, both with pole and with MSbar masses.
Solution method:
Representation of parton distributions and splitting functions on a grid in x, discretization of DGLAP evolution equations and higher-order interpolation for general values of x, numerical solution of the resulting discretized evolution equations using Runge–Kutta methods.
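After discretization on a grid, the evolution equations above reduce to a coupled linear system in the evolution variable, which the Runge-Kutta method then integrates. The following is a generic sketch of that final step, not APFEL code: a classic RK4 integrator applied to an illustrative diagonal "kernel" matrix standing in for the discretized splitting functions, chosen so the result is checkable against the exact exponential solution.

```python
import numpy as np

def rk4_evolve(f, kernel, t0, t1, steps):
    """Evolve df/dt = kernel @ f (the linear system obtained by
    discretizing an evolution equation on a grid) with classic RK4."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = kernel @ f
        k2 = kernel @ (f + 0.5 * h * k1)
        k3 = kernel @ (f + 0.5 * h * k2)
        k4 = kernel @ (f + h * k3)
        f = f + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return f

# Illustrative diagonal kernel on a 4-point grid: exact solution is
# componentwise f0 * exp(K_ii * t), so the RK4 error can be checked.
K = np.diag([-0.5, -1.0, -1.5, -2.0])
f0 = np.ones(4)
f1 = rk4_evolve(f0, K, 0.0, 1.0, 100)
```

In a real evolution code the kernel is dense (interpolation plus convolution with splitting functions) and depends on the running coupling, but the RK4 stepping structure is the same.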
Restrictions:
Smoothness of the initial conditions for the PDF evolution.
Running time:
A few seconds for initialization, then ∼0.5 s for the generation of the PDF tables with combined QCD⊗QED evolution (on an Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66 GHz).
Cython: The Best of Both Worlds
Behnel, Stefan; Bradshaw, Robert; Citro, Craig; et al.
Computing in Science & Engineering, March-April 2011, Volume 13, Issue 2
Journal article, peer-reviewed
Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python's large overhead for numerical loops and the difficulty of efficiently using existing C and Fortran code, which Cython can interact with natively.
GenASiSBasics provides modern Fortran classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision (Version 4 of Basics) includes a name change and additions to functionality, including the facilitation of direct communication between GPUs.
Program title: SineWaveAdvection, SawtoothWaveAdvection, and RiemannProblem (fluid dynamics example problems illustrating GenASiSBasics); ArgonEquilibrium and ClusterFormation (molecular dynamics example problems illustrating GenASiSBasics)
CPC Library link to program files: https://doi.org/10.17632/6w9ygpygmc.3
Developer's repository link: https://github.com/GenASiS
Code Ocean capsule: https://codeocean.com/capsule/9737716
Licensing provisions: GPLv3
Programming language: Modern Fortran; OpenMP (tested with recent versions of GNU Compiler Collection (GCC), Cray Compiler Environment (CCE), IBM XL Fortran compiler)
Journal reference of previous version: Comput. Phys. Commun. 244 (2019) 483
Does the new version supersede the previous version?: Yes
Nature of problem: By way of illustrating GenASiSBasics functionality, solve example fluid dynamics and molecular dynamics problems.
Solution method: For fluid dynamics examples, finite-volume. For molecular dynamics examples, leapfrog and velocity-Verlet integration.
Reasons for new version: This version includes a significant name change, some minor additions to functionality, and two major additions to functionality: support for systems using AMD GPUs and infrastructure facilitating GPU-aware MPI communications.
Summary of revisions: The CONSTANT singleton has been updated to 2022 values [1].
The class MeasuredValueForm, which handles numbers with labels to provide a means of dealing with units, has been renamed QuantityForm.
An AddCommand and MultiplyAddCommand have been added to the ArrayOperations division of the code.
The Real_1D_Form and Real_3D_Form classes, used to construct “ragged arrays,” now have AllocateDevice ( ) methods to provide mirror allocation of GPU memory.
Show_Command now has an option to allow the display of more digits for integer and real numbers.
In the CurveImageForm and StructuredGridImageForm classes used for I/O, the SetGrid and SetReadAttributes methods have been replaced by SetGridWrite and SetGridRead respectively. An optional flag StorageOnlyOption of their Read methods provides streamlined data input that assumes the data being read conforms to the grid resolution and domain decomposition of the currently running program.
In the PROGRAM_HEADER singleton, the method RecordStatistics replaces ShowStatistics for recording memory usage and timers. The recording of these data has been refactored and streamlined. Memory usage statistics are now available on macOS. To facilitate organic ordering of timer data in the order in which timers are encountered in the code (avoiding the need for hard-coded timer setup routines), the AddTimer method has been deleted; the Timer method now returns a pointer to an existing instance of TimerForm, or to a new one if it does not yet exist. WARNING: It is important to initialize timer handle variables to zero (or a negative value) so that the code recognizes that a new timer needs to be created, and to avoid spurious handle values.
GPU-aware MPI communications (passing GPU memory addresses directly to MPI routines) are now supported by the MessagePassing classes (see the original article and the Version 2 update for more detailed descriptions of the MessagePassing classes). A new method AllocateDevice ( ) has been added to these classes to activate this feature. When the communication buffers are allocated in an instantiation of the class, AllocateDevice ( ) creates a mirror allocation of the buffers on the GPU. When associations with pre-existing arrays are used as the communication buffers in the class instantiation, AllocateDevice ( ) deduces the GPU memory addresses associated with these buffers for use in future MPI communications.
The example fluid dynamics problem RiemannProblem included in this release has been modified to illustrate the use of GPU-aware MPI. In the DistributedMeshForm class, a call to the AllocateDevice ( ) method is made for the instances of the MessageIncoming_* and MessageOutgoing_* classes when GPU offload is enabled (see [2] and the Version 3 update of this article for more detailed descriptions of GPU offload in GenASiS). The use of GPU-aware communication can be explicitly turned on or off using the command-line argument DevicesCommunicate=T or DevicesCommunicate=F, respectively. On the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) [3], exploiting GPU-aware communications yields over 20% speedups for RiemannProblem by avoiding explicit GPU-memory to CPU-memory data movement for MPI communications.
The example program RiemannProblem has been modified such that the use of GPU offload can be controlled by a command-line argument UseDevice=T,F when the executable is built with OpenMP offload support. For example, on OLCF Summit, the following commands build and execute the three-dimensional RiemannProblem with 512³ cells three times with eight MPI processes. The first run, by default, uses GPU offload and GPU-aware communications. The second run uses MPI communications on the host. Finally, the third run uses OpenMP threading on the CPU by disabling GPU offload, which also automatically disables GPU-aware communications. (Display omitted.) This revision also adds support for AMD GPUs and other accelerators supported by the HIP programming model [4], as provided by the new file Device_HIP.c under the directory Modules/Basics/Devices. The functionalities provided here are either not currently available in OpenMP or not yet widely implemented, such as inquiry of GPU memory usage and allocation of host page-locked memory. The use of Device_CUDA.c and Device_HIP.c is mutually exclusive and controlled by the Makefile variables DEVICE_CUDA and DEVICE_HIP, respectively. An example of how this is done can be found in the machine Makefile Makefile_Cray_CCE.
Additional comments including restrictions and unusual features: Uses the MPI [5] and Silo [6] libraries. The example problems named above are not ends in themselves, but serve to illustrate our object-oriented approach and the functionality available through GenASiSBasics. In addition to these more substantial examples, we provide individual unit test programs for the individual classes that constitute GenASiSBasics.
GenASiSBasics is available in the CPC Program Library and also at https://github.com/GenASiS.
[1] R.L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022 (2022) 083C01.
[2] R.D. Budiardja, C.Y. Cardall, Parallel Comput. 88 (2019) 102544.
[3] https://docs.olcf.ornl.gov/systems/summit_user_guide.html
[4] https://rocmdocs.amd.com/en/latest/Programming_Guides/Programming-Guides.html
[5] https://www.mpi-forum.org
[6] https://wci.llnl.gov/simulation/computer-codes/silo
Based on the OPP technique and the HELAC framework, HELAC-1LOOP is a program that is capable of numerically evaluating QCD virtual corrections to scattering amplitudes. A detailed presentation of the algorithm is given, along with instructions to run the code and benchmark results. The program is part of the HELAC-NLO framework that allows for a complete evaluation of QCD NLO corrections.
Program title:HELAC-1LOOP
Catalogue identifier: AEOC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOC_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 290945
No. of bytes in distributed program, including test data, etc.: 3013326
Distribution format: tar.gz
Programming language: Fortran (gfortran, http://gcc.gnu.org/fortran/; lahey95, http://www.lahey.com; ifort, http://software.intel.com).
Computer: Any.
Operating system: Linux, Unix, Mac OS.
Classification: 11.1.
Nature of problem:
The evaluation of virtual one-loop amplitudes for multi-particle scattering is a long-standing problem [1]. In recent years the OPP reduction technique [2] opened the road to a fully numerical approach based on the evaluation of the one-loop amplitude for well-defined values of the loop momentum.
Solution method:
By using HELAC [3-5] and CutTools [6], HELAC-1LOOP is capable of evaluating QCD virtual corrections [7]. The one-loop n-particle amplitudes are constructed as part of the (n+2) tree-order ones, using the basic recursive algorithm of HELAC. A Les Houches Event (LHE) file is produced, combining the complete information from the tree-order and virtual one-loop contributions. In conjunction with real corrections, obtained with the use of HELAC-DIPOLES [8], the full NLO corrections can be computed. The program has been successfully used in many applications.
Running time:
Depending on the number of particles and generated events, from seconds to days.
References:
[1] R.K. Ellis, Z. Kunszt, K. Melnikov, G. Zanderighi, arXiv:1105.4319 [hep-ph].
[2] G. Ossola, C.G. Papadopoulos, R. Pittau, Nucl. Phys. B 763 (2007) 147, arXiv:hep-ph/0609007.
[3] A. Kanaki, C.G. Papadopoulos, Comput. Phys. Commun. 132 (2000) 306, arXiv:hep-ph/0002082.
[4] C.G. Papadopoulos, Comput. Phys. Commun. 137 (2001) 247, arXiv:hep-ph/0007335.
[5] A. Cafarella, C.G. Papadopoulos, M. Worek, Comput. Phys. Commun. 180 (2009) 1941, arXiv:0710.2427 [hep-ph].
[6] G. Ossola, C.G. Papadopoulos, R. Pittau, JHEP 0803 (2008) 042, arXiv:0711.3596 [hep-ph].
[7] A. van Hameren, C.G. Papadopoulos, R. Pittau, JHEP 0909 (2009) 106, arXiv:0903.4665 [hep-ph].
[8] M. Czakon, C.G. Papadopoulos, M. Worek, JHEP 0908 (2009) 085, arXiv:0905.0883 [hep-ph].
Harald Siebert studied philosophy in Augsburg, Munich, and at the Sorbonne. He earned his doctorate in the history of science in Paris and Berlin. In 2007 he was awarded the young researchers' prize of the International Academy of the History of Science.