The spectral analysis of geological and geophysical data has long been a fundamental tool for understanding Earth's processes. We present a Fortran 90 library for multitaper spectrum estimation, a state-of-the-art method that has been shown to outperform standard methods. The library goes beyond power spectrum estimation and extracts additional information for the user, including confidence intervals, diagnostics for single-frequency periodicities, and coherence and transfer functions for multivariate problems. In addition, the sine multitaper method can also be applied. The library provides the tools needed in multiple fields of the Earth sciences for the analysis of data, as the examples illustrate.
Many of the static and dynamic properties of an atomic Bose–Einstein condensate (BEC) are usually studied by solving the mean-field Gross–Pitaevskii (GP) equation, which is a nonlinear partial differential equation for short-range atomic interactions. More recently, BECs of atoms with long-range dipolar atomic interactions have been used in theoretical and experimental studies. For dipolar atomic interactions, the GP equation is a partial integro-differential equation, requiring a complex algorithm for its numerical solution. Here we present numerical algorithms for both stationary and non-stationary solutions of the full three-dimensional (3D) GP equation for a dipolar BEC, including the contact interaction. We also consider the simplified one- (1D) and two-dimensional (2D) GP equations satisfied by cigar- and disk-shaped dipolar BECs. We employ the split-step Crank–Nicolson method with real- and imaginary-time propagation, respectively, for the numerical solution of the GP equation for the dynamic and static properties of a dipolar BEC. The atoms are considered to be polarized along the z axis, and we consider ten different cases: stationary and non-stationary solutions of the GP equation for a dipolar BEC in 1D (along the x and z axes), 2D (in the x–y and x–z planes), and 3D. We provide working codes in Fortran 90/95 and C for these ten cases (twenty programs in all). We present numerical results for the energy, chemical potential, root-mean-square sizes, and density of the dipolar BECs and, where available, compare them with the results of other authors and of variational and Thomas–Fermi approximations.
Program title: (i) imag1dZ, (ii) imag1dX, (iii) imag2dXY, (iv) imag2dXZ, (v) imag3d, (vi) real1dZ, (vii) real1dX, (viii) real2dXY, (ix) real2dXZ, (x) real3d
Catalogue identifier: AEWL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEWL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 111384
No. of bytes in distributed program, including test data, etc.: 604013
Distribution format: tar.gz
Tinker 8: Software Tools for Molecular Design. Rackers, Joshua A.; Wang, Zhi; Lu, Chao; et al.
Journal of Chemical Theory and Computation, 10/2018, Volume 14, Issue 10
Journal Article
Peer reviewed
Open access
The Tinker software, currently released as version 8, is a modular molecular mechanics and dynamics package written primarily in a standard, easily portable dialect of Fortran 95 with OpenMP extensions. It supports a wide variety of force fields, including polarizable models such as the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. The package runs on Linux, macOS, and Windows systems. In addition to canonical Tinker, there are branches, Tinker-HP and Tinker-OpenMM, designed for use on message passing interface (MPI) parallel distributed memory supercomputers and state-of-the-art graphical processing units (GPUs), respectively. The Tinker suite also includes a tightly integrated Java-based graphical user interface called Force Field Explorer (FFE), which provides molecular visualization capabilities as well as the ability to launch and control Tinker calculations.
GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision – Version 3 of Basics – includes a significant name change, some minor additions to functionality, and a major addition to functionality: infrastructure facilitating the offloading of computational kernels to devices such as GPUs.
Program Title: SineWaveAdvection, SawtoothWaveAdvection, and RiemannProblem (fluid dynamics example problems illustrating GenASiS Basics); ArgonEquilibrium and ClusterFormation (molecular dynamics example problems illustrating GenASiS Basics)
Program Files doi: http://dx.doi.org/10.17632/6w9ygpygmc.2
Licensing provisions: GPLv3
Programming language: Fortran 2003 (tested with GNU Compiler Collection 8.1.0, Intel Fortran Compiler 18.0.3, Cray Compiler Environment 8.6.5, IBM XL Fortran 16.1.0)
Journal reference of previous version: Computer Physics Communications 214 (2017) 247
Does the new version supersede the previous version?: Yes
Reasons for the new version: This version includes a significant name change, some minor additions to functionality, and a major addition to functionality: infrastructure facilitating the offloading of computational kernels to devices such as GPUs.
Summary of revisions:
The class VariableGroupForm – a major workhorse for handling sets of related fields – has been renamed StorageForm.
The ability to use Unicode characters in standard output has been added, but is currently only supported by the GNU Compiler Collection (GCC). This capability is used to display exponents as numerical superscripts, as well as symbols such as ħ, ⊙, and Å in the display of relevant units. It is enabled by a line (display omitted) now included in the machine-specific makefile fragments with a GCC suffix in the Build/Machines directory.
There are some changes to units and constants. The geometrized units of past releases (G = c = k = 1, with the meter as the fundamental unit) have been replaced by natural units (ħ = c = k = 1, with MeV as the fundamental unit). Lorentz–Heaviside electromagnetic units are employed (permeability μ = 1; no factors of 4π in the Maxwell equations). This refers to numbers as processed internally by the code; as described in the initial release, users can employ the members of the UNIT singleton for input/output purposes, that is, to specify or display numbers with any available units they wish. A number of units have been added, and the specification of all units has been put on a more rational basis, in keeping with six of the seven standard SI base units (meter, kilogram, second, ampere, kelvin, mole; we have not needed the candela; see [3]). Some physical and astrophysical constants have also been added. All constants have been updated to 2018 values [4].
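As a worked illustration of such natural units (generic physics, not taken from the code): with ħ = c = 1, every dimensionful quantity reduces to a power of MeV, and restoring the suppressed factors of ħ and c converts back to conventional units:

```latex
\hbar c \approx 197.327~\mathrm{MeV\,fm}
  \;\Rightarrow\;
  1~\mathrm{MeV}^{-1} \approx 197.327~\mathrm{fm} = 1.97327\times10^{-13}~\mathrm{m},
\qquad
\hbar \approx 6.582\times10^{-22}~\mathrm{MeV\,s}
  \;\Rightarrow\;
  1~\mathrm{MeV}^{-1} \approx 6.582\times10^{-22}~\mathrm{s}.
```

With k = 1, temperature likewise maps to energy: 1 MeV ≈ 1.1605 × 10¹⁰ K.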
For notifications to standard output, a few tweaks to ignorability levels have been made in various classes. The default output to screen is now less verbose (ignorability INFO_1, our designation for messages of significance just below WARNING).
A couple of additions have been made to MessagePassing: null subcommunicators are accommodated, and an AllToAll_V method has been added to CollectiveOperation_R_Form.
Enhancements to timer functionality have been made. The class TimerForm now has a member Level, which is specified in order to control indentation in screen output. Some functionality has been added to PROGRAM_HEADER_Singleton to work with timers. A method TimerPointer returns a pointer to a timer with a specified Handle (typically a meaningfully named integer). The new members TimerLevel and TimerDisplayFraction of PROGRAM_HEADER_Singleton, which can be set from the command line, can be used to suppress output from timings deemed insignificant, based on timer level or a measured time interval falling below a specified fraction of the total execution time.
The most significant addition in functionality in this release is infrastructure to offload computational kernels to hardware accelerators such as GPUs using the OpenMP device-related directives and runtime library routines in OpenMP 4.5 and later (https://www.openmp.org/specifications/). This infrastructure, implemented in a new subdivision Devices (see Fig. 1), provides lower-level routines to perform memory management between the host (CPU) and device (GPU), including data allocation, data movement between host and device, and device-to-host memory address association. The routines are implemented as Fortran wrappers to the OpenMP runtime library and to CUDA (https://developer.nvidia.com/about-cuda) routines written in C. Additional methods and an option utilizing the lower-level Devices routines have been added to our StorageForm class. They are: UpdateHost() and UpdateDevice() to copy data from device to host and host to device, respectively; AllocateDevice() to allocate memory on the device mirroring the allocation on the host; and PinnedOption, an optional flag to the Initialize() method to allocate the host memory in a page-locked region to facilitate faster data transfer between host and device. A detailed description of the implementation of this functionality can be found in [5].
To deal with different levels of compiler support for device-related OpenMP directives, we use the preprocessor in some source files in Devices to guard against attempted compilation of unsupported features. Preprocessor macro substitution is also utilized in OpenMP directives to switch between multi-threading parallelism on CPUs and offload parallelism to GPUs. Setting the makefile variable ENABLE_OMP_OFFLOAD to 1 – the default in the machine-specific makefile Makefile_POWER_XL for the XL compiler on POWER-based supercomputers – sets the appropriate flags and preprocessing to enable compilation for OpenMP offload parallelism. Alternatively, a command (display omitted) sets this variable when make is invoked from the command line.
Information regarding the number of devices available to the program, the kind of OpenMP parallelism enabled (i.e. multi-threading or offload), and the selected OpenMP loop scheduling is displayed at runtime by PROGRAM_HEADER_Singleton. When offload parallelism is enabled, the loop scheduling is automatically set to static with a chunk size of 1. With multi-threading parallelism, the schedule defaults to guided but can be overridden at runtime by setting the environment variable OMP_SCHEDULE appropriately.
The example problem RiemannProblem in the Examples directory under the Basics division has been modified to exploit GPUs using this new functionality. The computational kernels for the problem have been annotated with the new OpenMP directives (via the appropriate preprocessor macros) such that they are offloaded to the GPUs when offload parallelism is enabled during compilation. In [5] we demonstrate the weak scaling of this example problem up to 8000 GPUs on the Summit supercomputer at the Oak Ridge Leadership Computing Facility (https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/). Figs. 2 and 3 show a visualization of the three-dimensional version of RiemannProblem at 1280³ resolution executed with 1000 GPUs.
Nature of problem: By way of illustrating GenASiS Basics functionality, solve example fluid dynamics and molecular dynamics problems.
Solution method: For fluid dynamics examples, finite-volume. For molecular dynamics examples, leapfrog and velocity-Verlet integration.
External routines/libraries: MPI [1] and Silo [2]
Additional comments including restrictions and unusual features: The example problems named above are not ends in themselves, but serve to illustrate our object-oriented approach and the functionality available through GenASiS Basics. In addition to these more substantial examples, we provide individual unit test programs for each of the classes that make up GenASiS Basics.
GenASiS Basics is available in the CPC Program Library and also at https://github.com/GenASiS.
[1] http://www.mcs.anl.gov/mpi/
[2] https://wci.llnl.gov/simulation/computer-codes/silo
[3] https://en.wikipedia.org/wiki/SI_base_unit
[4] M. Tanabashi et al. (Particle Data Group), Phys. Rev. D 98 (2018) 030001.
[5] R.D. Budiardja and C.Y. Cardall, "Targeting GPUs with OpenMP Directives on Summit: A Simple and Effective Fortran Experience," submitted for publication, Parallel Computing: Systems and Applications, arXiv:1812.07977 [physics.comp-ph].
This is a thorough yet understandable text about the boundary element method (BEM), an attractive alternative to the finite element method (FEM). It not only explains the theory but also deals with its implementation in computer code written in FORTRAN 95 (the software can be freely downloaded). Applications range from potential problems to static and dynamic problems in elasticity and plasticity. The book also addresses the fast solution of large-scale problems using parallel processing hardware. Special topics such as the treatment of inclusions, heterogeneous domains, and changing geometry are also addressed. Most chapters contain exercises, which makes the book suitable for teaching. Applications of the method to industrial problems are shown. The book is designed for engineers and scientists who want to understand how the method works, apply it, and solve real problems.
This review paper focuses on the application of software in solar drying systems. Software is essential for developing and analyzing mathematical models and for predicting the performance of different kinds of solar drying systems, as well as for predicting crop temperature, moisture content, drying rate, drying kinetics, and crop color. Computational fluid dynamics can be used to analyze and investigate air flow and temperature distribution patterns through appropriate simulation with the help of ANSYS and FLUENT. MATLAB and FORTRAN are very useful tools for developing mathematical models that predict crop temperature, air temperature, and the moisture evaporated, and for training and testing various models. For statistical data analysis, packages such as SPSS, SigmaPlot, and Statistica are used. All recently employed software and their utility in solar drying systems are emphasized in this communication. This comprehensive review of software applications in different solar drying systems is useful for academics, scientists, and researchers.
A comprehensive model was developed to simulate gasification of pine sawdust in the presence of both air and steam. The proposed model improved upon the premise of an existing ASPEN PLUS-based biomass gasification model. These enhancements include the addition of a temperature-dependent pyrolysis model, an updated hydrodynamic model, more extensive gasification kinetics, and the inclusion of tar formation and reaction kinetics. ASPEN PLUS was similarly used to simulate this process; however, a more extensive FORTRAN subroutine was applied to appropriately model the complexities of a bubbling fluidized bed ("BFB") gasifier. To confirm validity, the model's predictions were compared with actual experimental results. In addition, the relative accuracy of the comprehensive model was compared with that of the original base model to see whether any improvement had been made.
Results show that the model predicts H2, CO, CO2, and CH4 composition with reasonable accuracy in varying temperature, steam-to-biomass, and equivalence ratio conditions. Mean error between predicted and experimental results is calculated to range from 6.1% to 37.6%. Highest relative accuracy was obtained in CO composition prediction while the results with the least accuracy were for CH4 and CO2 estimation at changing steam-to-biomass ratios and equivalence ratios. When compared to the original model, the comprehensive model predictions of H2 and CO molar fractions are more accurate than those of CO2 and CH4. For CO2 and CH4, the original model predicted with comparable or better accuracy when varying steam-to-biomass ratio and equivalence ratios but the comprehensive model performed better at varying temperatures.
•A comprehensive kinetic model was developed to include pyrolysis and tar reactions.
•The highest accuracy was obtained in the CO and H2 compositions predicted by the model.
•Reaction temperature had the greatest overall influence on H2 production.
•Mass transfer limitations showed a non-negligible effect on the CO/CO2 ratio.
•CH4 is highly dependent on biomass decomposition in pyrolysis before gasification.
•High thermal conductivity phase change composite for thermal energy storage.
•Hybrid HVAC/thermal energy storage with high-conductivity phase change composite.
•Hybrid HVAC-TES for flexible resource to shave/shift on-peak power demand.
•HVAC efficiency and optimized control strategies for smart grid applications.
•HVAC-thermal storage system level modeling and simulation study using Aspen Plus®.
This paper evaluates the use of a phase change composite (PCC) material consisting of paraffin wax (n-Tetradecane) and expanded graphite as a potential storage medium for cold thermal energy storage (TES) systems to support air conditioning applications. The PCC-TES system is proposed to be integrated with the vapor compression refrigeration cycle of an air conditioning (AC) system. The use of this PCC material is novel because of its unique material and thermal characteristics as compared to the ice or chilled water predominantly used in commercial TES systems for air cooling applications. This work proposed and tested the hypothesis that integrating a conventional AC with a PCC-TES would yield significant benefits in compressor size, compressor efficiency, electricity consumption, and CO2 emissions. The proposed integration would also help reduce electricity demand during peak hours and the necessity of building more expensive power plants and distribution lines. To test the hypothesis, a simulation model was prepared in Aspen Plus® software. However, Aspen Plus® does not have a built-in library to predict the PCC's melting and solidification behaviors. Therefore, an analytical heat transfer model was written as a system of equations in Fortran code within an Aspen Plus® calculation block to simulate the phase change behavior and associated characteristics. The overall simulation model, designed specifically for this research, consists of two main parts that communicate with each other: the first simulates the AC's refrigeration loop using built-in Aspen Plus® components, and the second implements the PCC heat transfer model written within the Aspen Plus® calculation block. The simulation model was validated by cross-checking the calculated results with experimental data from an actual 4 kWh PCC-TES benchtop thermal storage system.
Very good agreement was observed between the simulations and laboratory data. Simulated performance of the proposed integration between the AC and the PCC-TES indicated the potential to (1) downsize the compressor by 50%, (2) lower electricity consumption by the compressor by 30%, (3) lower CO2 emissions by 30%, and (4) double the compressor efficiency during off-peak and mid-peak hours. The present work is a conceptual design and optimization study and does not account for integration inefficiencies, energy losses, real-world operational complexity, or the added capital cost of TES integration with AC systems.
PySCF: the Python-based simulations of chemistry framework. Sun, Qiming; Berkelbach, Timothy C.; Blunt, Nick S.; et al.
Wiley Interdisciplinary Reviews: Computational Molecular Science, January/February 2018, Volume 8, Issue 1
Journal Article
Peer reviewed
Open access
Python-based simulations of chemistry framework (PySCF) is a general-purpose electronic structure platform designed from the ground up to emphasize code simplicity, so as to facilitate new method development and enable flexible computational workflows. The package provides a wide range of tools to support simulations of finite-size systems, extended systems with periodic boundary conditions, low-dimensional periodic systems, and custom Hamiltonians, using mean-field and post-mean-field methods with standard Gaussian basis functions. To ensure ease of extensibility, PySCF uses the Python language to implement almost all of its features, while computationally critical paths are implemented with heavily optimized C routines. Using this combined Python/C implementation, the package is as efficient as the best existing C or Fortran-based quantum chemistry programs. In this paper, we document the capabilities and design philosophy of the current version of the PySCF package. WIREs Comput Mol Sci 2018, 8:e1340. doi: 10.1002/wcms.1340
This article is categorized under:
Structure and Mechanism > Computational Materials Science
Electronic Structure Theory > Ab Initio Electronic Structure Methods
Software > Quantum Chemistry
The PySCF package provides a Python programming environment to study the electronic structure of molecules and solids.