We present version 3.2 of the LanHEP software package. New features of the program include tools for models with extra dimensions, implementation of particle classes for FeynArts output, templates with LanHEP statements, color sextet particles, and new substitution techniques which allow one to define new routines.
Program title: LanHEP
Catalogue identifier: AECH_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECH_v2_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 92232
No. of bytes in distributed program, including test data, etc.: 469861
Distribution format: tar.gz
Programming language: C.
Computer: PC.
Operating system: Linux.
RAM: 2MB (SM) - 12MB (MSSM) - 120MB (MSSM with counterterms)
Classification: 4.4.
Catalogue identifier of previous version: AECH_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180(2009)431
Does the new version supersede the previous version?: Yes
Nature of problem: Deriving Feynman rules from the Lagrangian
Solution method: The program reads the Lagrangian written in a compact form, close to the one used in publications. This means that Lagrangian terms can be written with summation over indices of broken symmetries and using special symbols for complicated expressions, such as the covariant derivative and the strength tensor for gauge fields. Tools for checking the correctness of the model and for simplifying the output expressions are provided. The output is Feynman rules in terms of physical fields and independent parameters in the form of CompHEP or CalcHEP model files, which allows one to start calculations of processes in the new physical model. Alternatively, Feynman rules can be generated in FeynArts format or as a LaTeX table.
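The central step, extracting a vertex factor from an interaction term, can be illustrated in miniature. The following sketch is our toy example, not LanHEP's implementation: for a momentum-independent polynomial interaction such as the textbook term L_int = -(g/3!)φ³ (the field φ and coupling g are our assumptions, not part of the LanHEP distribution), the three-point Feynman rule is i·∂³L_int/∂φ³ = -ig.

```python
# Toy vertex extraction, assuming L_int = -(g/3!) * phi^3 (a textbook
# example, not a LanHEP model file).
g = 2.0

def L_int(phi):
    return -g / 6.0 * phi ** 3

# Central finite difference for the third derivative (exact for cubics):
h = 0.5
d3 = (L_int(2 * h) - 2 * L_int(h) + 2 * L_int(-h) - L_int(-2 * h)) / (2 * h ** 3)

# The momentum-independent 3-point Feynman rule: i * d^3 L_int / d phi^3
vertex = 1j * d3
print(vertex.imag)  # ≈ -2.0, i.e. vertex = -i*g with g = 2
```

LanHEP performs the analogous operation symbolically for full multi-field Lagrangians with index summations; this numerical toy only shows the underlying definition of a vertex factor.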
Reasons for new version: New features make the description of new models simpler and allow one to create models with extra dimensions and color sextet particles.
Summary of revisions:
• Tools for models with extra dimensions.
• Particle classes for FeynArts output.
• Templates for LanHEP statements.
• Color sextet particles.
• New substitution techniques which allow one to define new routines.
Running time: 1sec (SM) - 8sec (MSSM) - 8min (MSSM with counterterms)
The free-space transfer of high-fidelity optical signals between remote locations has many applications, including both classical and quantum communication, precision navigation, clock synchronization, etc. The physical processes that contribute to signal fading and loss need to be carefully analyzed in the theory of light propagation through atmospheric turbulence. Here we derive the probability distribution for the atmospheric transmittance including beam wandering, beam shape deformation, and beam-broadening effects. Our model, referred to as the elliptic beam approximation, applies to weak, weak-to-moderate, and strong turbulence and hence to the most important regimes in atmospheric communication scenarios.
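The beam-wandering contribution to transmittance fluctuations can be illustrated numerically. This is a minimal sketch under simplifying assumptions (a circular Gaussian beam, a hard circular aperture, Gaussian-distributed centroid wander), not the paper's elliptic beam model; the waist W, aperture radius a, and wander strength sigma are illustrative values.

```python
import math, random

def transmittance(x0, y0, W=1.0, a=1.0, n=200):
    """Fraction of Gaussian-beam power (waist W, center (x0, y0))
    passing a circular aperture of radius a, by polar-grid quadrature."""
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * a / n
        for j in range(n):
            th = (j + 0.5) * 2 * math.pi / n
            x, y = r * math.cos(th), r * math.sin(th)
            rho2 = (x - x0) ** 2 + (y - y0) ** 2
            total += r * (2 / (math.pi * W * W)) * math.exp(-2 * rho2 / (W * W))
    return total * (a / n) * (2 * math.pi / n)

# Sanity check: a centered beam has T = 1 - exp(-2 a^2 / W^2) ≈ 0.8647
print(transmittance(0.0, 0.0))

# Beam wandering: sample Gaussian-distributed center displacements and
# average; wandering always lowers the mean transmittance.
random.seed(1)
sigma = 0.5  # wander strength (illustrative)
samples = [transmittance(random.gauss(0, sigma), random.gauss(0, sigma), n=60)
           for _ in range(100)]
print(sum(samples) / len(samples))
```

The full analysis in the paper also tracks deformation of the beam into an elliptic profile and broadening, which reshape the transmittance distribution beyond what pure wandering produces.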
Abstract
The aim of this article is to provide a methodology for designing and controlling elastic, self-organizing computing for artificial-intelligence space exploration. The artificial intelligence application itself should be elastic and distributed, given the limited information-technology resources available in space. The most important use of elastic computing is artificial intelligence's ability to continually learn and adapt to evolving environments and goals. The conceptual framework uses an elastic infrastructure model and the terminology of graph dynamical systems to capture a broad variety of processes taking place on self-organizing networks. The methodology's uniqueness lies in the theory of graph dynamical systems used to explain the life cycle of self-organizing processes.
The recent overall Northern Hemisphere warming was accompanied by several severe northern continental winters, such as the extremely cold winter of 2005-2006 in Europe and northern Asia. Here we show that an anomalous decrease of wintertime sea ice concentration in the Barents-Kara (B-K) seas could bring about extreme cold events like the winter of 2005-2006. Our simulations with the ECHAM5 general circulation model demonstrate that lower-troposphere heating over the B-K seas in the Eastern Arctic caused by the sea ice reduction may result in a strong anticyclonic anomaly over the Polar Ocean and anomalous easterly advection over northern continents. This causes a continental-scale winter cooling reaching -1.5°C, with a more than threefold increase in the probability of cold winter extremes over large areas including Europe. Our results imply that several recent severe winters do not conflict with the global warming picture but rather supplement it, being in qualitative agreement with the simulated large-scale atmospheric circulation realignment. Furthermore, our results suggest that the high-latitude atmospheric circulation response to the B-K sea ice decrease is highly nonlinear and characterized by a transition from anomalous cyclonic circulation to an anticyclonic one and then back again to a cyclonic type of circulation as the B-K sea ice concentration gradually reduces from 100% to ice-free conditions. We present a conceptual model that may explain the nonlinear local atmospheric response in the B-K seas region by the interplay between convection over the surface heat source and the baroclinic effect due to modified temperature gradients in the vicinity of the heating area.
To deliver food security for the 9 billion population in 2050, a 70% increase in world food supply will be required. Projected climatic and environmental changes emphasize the need for breeding strategies that deliver both a substantial increase in yield potential and resilience to extreme weather events such as heat waves, late frost, and drought. Heat stress around sensitive stages of wheat development has been identified as a possible threat to wheat production in Europe. However, no estimates have been made to assess yield losses due to increased frequency and magnitude of heat stress under climate change. Using existing experimental data, the Sirius wheat model was refined by incorporating the effects of extreme temperature during flowering and grain filling on accelerated leaf senescence, grain number, and grain weight. This allowed us, for the first time, to quantify yield losses resulting from heat stress under climate change. The model was used to optimize wheat ideotypes for CMIP5-based climate scenarios for 2050 at six sites in Europe with diverse climates. The yield potential for heat-tolerant ideotypes can be substantially increased in the future (e.g. by 80% at Seville, 100% at Debrecen) compared with the current cultivars by selecting an optimal combination of wheat traits, e.g. optimal phenology and extended duration of grain filling. However, at two sites, Seville and Debrecen, the grain yields of heat-sensitive ideotypes were substantially lower (by 54% and 16%) and more variable compared with heat-tolerant ideotypes, because the extended grain filling required for the increased yield potential was in conflict with episodes of high temperature during flowering and grain filling. Despite much earlier flowering at these sites, the risk of heat stress affecting yields of heat-sensitive ideotypes remained high.
Therefore, heat tolerance in wheat is likely to become a key trait for increased yield potential and yield stability in southern Europe in the future.
We present a model that explains why galaxies form stars on a timescale significantly longer than the timescales of processes governing the evolution of interstellar gas. We show that gas evolves from a non-star-forming to a star-forming state on a relatively short timescale, and thus the rate of this evolution does not limit the star formation rate (SFR). Instead, the SFR is limited because only a small fraction of star-forming gas is converted into stars before star-forming regions are dispersed by feedback and dynamical processes. Thus, gas cycles into and out of a star-forming state multiple times, which results in a long timescale on which galaxies convert gas into stars. Our model does not rely on the assumption of equilibrium and can be used to interpret trends of depletion times with the properties of observed galaxies and the parameters of star formation and feedback recipes in simulations. In particular, the model explains how feedback self-regulates the SFR in simulations and makes it insensitive to the local star formation efficiency. We illustrate our model using the results of an isolated L*-sized galaxy simulation that reproduces the observed Kennicutt-Schmidt relation for both molecular and atomic gas. Interestingly, the relation for molecular gas is almost linear on kiloparsec scales, although a nonlinear relation is adopted in simulation cells. We discuss how a linear relation emerges from non-self-similar scaling of the gas density PDF with the average gas surface density.
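The cycling argument can be made concrete with a toy two-reservoir model. This is our illustration, not the paper's calculation: gas converts to a star-forming state on a timescale t_c, is dispersed back on t_d, and forms stars with a small efficiency eps per free-fall time t_ff. All rate values below are arbitrary round numbers chosen only to show the effect.

```python
# Toy gas-cycling model: even though each cycle is short, the global
# depletion time t_dep = M_gas / SFR comes out much longer than t_ff
# because only a fraction eps of star-forming gas turns into stars per
# free-fall time before dispersal.
t_c, t_d, t_ff, eps = 10.0, 5.0, 1.0, 0.01   # illustrative time units

M_n, M_sf, M_star = 1.0, 0.0, 0.0            # non-SF gas, SF gas, stars
dt = 0.01
for _ in range(20000):                       # evolve for 200 time units
    sfr = eps * M_sf / t_ff
    dM_n  = -M_n / t_c + M_sf / t_d
    dM_sf =  M_n / t_c - M_sf / t_d - sfr
    M_n  += dM_n * dt
    M_sf += dM_sf * dt
    M_star += sfr * dt

t_dep = (M_n + M_sf) / (eps * M_sf / t_ff)
print(t_dep)  # hundreds of t_ff, despite cycling timescales of 5-10
```

The steady-state depletion time here is t_ff / (eps · f_sf), where f_sf is the equilibrium star-forming fraction, which is how a small per-cycle conversion efficiency stretches fast gas cycling into a long galactic gas-consumption timescale.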
micrOMEGAs is a code to compute dark matter observables in generic extensions of the standard model. This version of micrOMEGAs includes a generalization of the Boltzmann equations to take into account the possibility of two dark matter candidates. The modification of the relic density calculation to include interactions between the two dark matter sectors as well as semi-annihilation is presented. Both dark matter signals in direct and indirect detection are computed as well. An extension of the standard model with two scalar doublets and a singlet is used as an example.
Program title: micrOMEGAs4.1
Catalogue identifier: ADQR_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v4_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 738425
No. of bytes in distributed program, including test data, etc.: 9807620
Distribution format: tar.gz
Programming language: C and Fortran.
Computer: PC, Mac.
Operating system: UNIX (Linux, Darwin).
RAM: 50MB depending on the number of processes required.
Classification: 1.9, 11.6.
Catalogue identifier of previous version: ADQR_v3_0
Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 960
External routines: CalcHEP, SuSpect, NMSSMTools, CPSuperH, LoopTools, HiggsBounds
Does the new version supersede the previous version?: Yes
Nature of problem: Calculation of the relic density and direct and indirect detection rates of the lightest stable particle in particle physics models with at most two stable dark matter candidates.
Solution method: In the case where the two dark matter particles have very different masses, we find that the equations for the evolution of the dark matter densities are stiff. To solve them we use a backward scheme with the Rosenbrock algorithm. The standard solution based on the Runge-Kutta method is still used for models with only one dark matter candidate.
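The need for an implicit scheme can be illustrated with a scalar stiff test problem. This is a minimal sketch, not the micrOMEGAs solver; the rate k and step dt are illustrative. For y' = -k(y - cos t) with k·dt ≫ 1, forward Euler diverges while a linearly implicit (first-order Rosenbrock-type) step remains stable:

```python
import math

k, dt, T = 1000.0, 0.01, 1.0   # stiff: k*dt = 10 >> stability limit

def f(t, y):
    return -k * (y - math.cos(t))

# Explicit (forward) Euler: amplification factor |1 - k*dt| = 9 per step
y, t = 1.0, 0.0
for _ in range(int(T / dt)):
    y += dt * f(t, y)
    t += dt
explicit = y

# Linearly implicit step: solve (1 - dt*J) dy = dt*f with Jacobian J = -k
y, t = 1.0, 0.0
for _ in range(int(T / dt)):
    y += dt * f(t, y) / (1.0 + dt * k)
    t += dt
implicit = y

print(abs(explicit), abs(implicit - math.cos(T)))
# the explicit result blows up; the implicit one tracks y ≈ cos t
```

For systems, the same idea requires factorizing (I - dt·J) with the full Jacobian matrix each step, which is the price paid for unconditional stability on stiff problems.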
Reasons for new version: There are many experiments that are currently searching for the remnants of dark matter annihilation and the relic density is determined precisely from cosmological measurements. In this version we generalize the Boltzmann equations to take into account the possibility of two dark matter candidates. Thus, in solving for the relic density we include interactions between the two dark matter sectors as well as semi-annihilation. The dark matter signals in direct and indirect detection are computed as well.
Summary of revisions:
• Generalization of the Boltzmann equations to include two dark matter candidates, their interactions and semi-annihilations; the relative density of the two dark matter components is taken into account when computing direct/indirect detection rates.
• Upgrade of the numerical method for solving the Boltzmann equations.
• Sample extensions of the standard model with an extra doublet and singlets which contain two stable neutral particles.
Unusual features: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
Running time: 4 sec
micrOMEGAs is a code to compute dark matter observables in generic extensions of the standard model. This new version of micrOMEGAs is a major update which includes a generalization of the Boltzmann equations to accommodate models with asymmetric dark matter or with semi-annihilation, and a first approach to a generalization of the thermodynamics of the Universe in the relic density computation. Furthermore, a switch to include virtual vector bosons in the final states of the annihilation cross section or relic density computations is added. Effective operators to describe loop-induced couplings of Higgses to two photons or two gluons are introduced, and reduced couplings of the Higgs are provided, allowing for a direct comparison with recent LHC results. A module that computes the signature of DM captured in celestial bodies in neutrino telescopes is also provided. Moreover, the direct detection module has been improved as concerns the implementation of the strange-quark content of the nucleon. New extensions of the standard model are included in the distribution.
Title of program: micrOMEGAs3.
Program obtainable from:http://lapth.cnrs.fr/micromegas
Computers for which the program is designed and others on which it has been tested: PC, Mac
Operating systems under which the program has been tested : UNIX (Linux, Darwin)
Programming language used: C and Fortran
Memory required to execute with typical data: 50 MB depending on the number of processes required.
No. of processors used: 1
Has the code been vectorized or parallelized: no
No. of bytes in distributed program, including test data: 70736 kB
External routines/libraries used: no
CPC Program Library subprograms used: CalcHEP, SuSpect, NMSSMTools, CPSuperH, LoopTools, HiggsBounds
Catalogue identifier of previous version: ADQR_v1_3
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 842
Does the new version supersede the previous version: yes
Nature of physical problem: Calculation of the relic density and direct and indirect detection rates of the lightest stable particle in a generic new model of particle physics.
Method of solution: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included as well as some 3-body final states. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. The propagation of the charged cosmic rays is solved within a semi-analytical two-zone model.
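The structure of the relic-density evolution can be sketched with the standard single-species freeze-out equation dY/dx = -(λ/x²)(Y² - Y_eq²), with Y_eq ∝ x^{3/2} e^{-x} and x = m/T. This is a textbook simplification, not the relativistic thermal-average treatment the program uses; the constants lam and a below are illustrative, and the quadratic implicit step is solved exactly to cope with the stiffness at small x.

```python
import math

lam, a = 1.0e5, 0.145            # illustrative constants, not model values

def Yeq(x):
    """Nonrelativistic equilibrium yield, Yeq = a * x^{3/2} * exp(-x)."""
    return a * x ** 1.5 * math.exp(-x)

# Backward-Euler step: Y1 = Y0 - A*(Y1^2 - Yeq^2), A = h*lam/x^2,
# solved exactly via the positive root of the quadratic in Y1.
Y, x, h = Yeq(1.0), 1.0, 1e-3
while x < 100.0:
    A = h * lam / (x * x)
    Y = (-1.0 + math.sqrt(1.0 + 4.0 * A * (Y + A * Yeq(x + h) ** 2))) / (2.0 * A)
    x += h

print(Y)  # frozen-out yield, vastly above the equilibrium value Yeq(100)
```

The yield tracks equilibrium until annihilations can no longer keep pace (around x ≈ ln(λa)), then freezes out at roughly x_f/λ, which is the quantity the full code converts into a relic density.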
Reasons for the new version: There are many experiments that are currently searching for the remnants of dark matter annihilation and the relic density is determined precisely from cosmological measurements. In this version we add the computation of dark matter signals in neutrino telescopes, we generalize the Boltzmann equations so as to take into account a larger class of dark matter models and improve the precision in the prediction of the relic density for DM masses that are below the W mass. We compute the signal strength for Higgs production in different channels to compare with the results of the LHC.
Summary of revisions:
• Generalization of the Boltzmann equations to include asymmetric dark matter and semi-annihilations; the DM asymmetry is taken into account when computing direct/indirect detection rates.
• Incorporation of loop-induced decays of Higgs particles to two photons and two gluons, and computation of the signal strength for Higgs production in various channels that can be compared to results from LHC searches.
• New module for the neutrino signature from DM capture in the Sun and the Earth.
• Annihilation cross sections for some selected 3-body processes in addition to the 2-body tree-level processes. The 3-body option can be included in the computation of the relic density and/or for annihilation of dark matter in the galaxy.
• Possibility of using different tables for the effective degrees of freedom in the early Universe.
• Annihilation cross sections for the loop-induced processes γγ and γZ0 in the NMSSM and the CPVMSSM.
• New function for incorporating DM clumps.
• New function to define the strange quark content of the nucleon.
• The LanHEP source code for new models is included.
• New models with scalar DM are included (the inert doublet model and a model with Z3 symmetry).
• New implementation of the NMSSM which uses the Higgs self-couplings and the particle spectrum calculated in NMSSMTools_4.1.
• New versions of spectrum generators used in the MSSM (Suspect_2.4.1) and in the CPVMSSM (CPsuperH2.3).
• Extended routines for flavor physics in the MSSM.
• New facilities to compute DM observables independently of the model.
• Updated interface tools to read files produced by other codes, allowing easy interfacing with other codes.
Typical running time: 4 s
Unusual features of the program: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
ABSTRACT We present a study of a star formation prescription in which the star formation efficiency (SFE) depends on the local gas density and turbulent velocity dispersion, as suggested by direct simulations of SF in turbulent giant molecular clouds (GMCs). We test the model using a simulation of an isolated Milky-Way-sized galaxy with a self-consistent treatment of turbulence on unresolved scales. We show that this prescription predicts a wide variation of the local SFE per free-fall time, ∼0.1%-10%, and of the gas depletion time, ∼0.1-10 Gyr. In addition, it predicts an effective density threshold for star formation due to the suppression of star formation in warm diffuse gas stabilized by thermal pressure. We show that the model predicts star formation rates (SFRs) in agreement with observations from the scales of individual star-forming regions to kiloparsec scales. This agreement is nontrivial, as the model was not tuned in any way and the predicted SFRs on all scales are determined by the distribution of the GMC-scale densities and turbulent velocities in the cold gas within the galaxy, which is shaped by galactic dynamics. The broad agreement of the star formation prescription calibrated in the GMC-scale simulations with observations both gives credence to such simulations and promises to put star formation modeling in galaxy formation simulations on a much firmer theoretical footing.
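The two quoted ranges are tied together by the identity t_dep = t_ff/ε_ff. A back-of-envelope check, using a mean molecular weight of 2.3 and a fiducial density of n = 100 cm⁻³ (our illustrative numbers, not values from the paper):

```python
import math

G = 6.674e-8                 # gravitational constant, cgs
m_H = 1.673e-24              # hydrogen mass, g
mu = 2.3                     # mean molecular weight (assumed)
n = 100.0                    # number density, cm^-3 (illustrative)

rho = mu * m_H * n
t_ff = math.sqrt(3 * math.pi / (32 * G * rho))   # free-fall time, s
t_ff_Myr = t_ff / 3.156e13

for eps_ff in (1e-3, 1e-2, 1e-1):
    t_dep_Gyr = t_ff_Myr / eps_ff / 1e3          # t_dep = t_ff / eps_ff
    print(eps_ff, round(t_ff_Myr, 2), round(t_dep_Gyr, 3))
# at fixed density, eps_ff of 0.1%-10% spans depletion times of
# roughly 3.4 Gyr down to ~0.03 Gyr
```

With density variations in the cold gas folded in, this is the sense in which a factor-of-100 spread in local ε_ff maps onto the ∼0.1-10 Gyr spread of depletion times.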
Two-dimensional Ising superconductivity formed in NbSe2, MoS2, WS2, and other transition-metal dichalcogenides is considered. For the superconducting state, the effective low-energy action for the phases of the order parameters has been obtained, and the collective modes in the system have been studied. It has been shown that the system contains not only the Goldstone mode but also a Leggett mode with a mass related to the difference between the singlet and triplet pairing constants. The effect of a low magnetic field parallel to the plane of the system has also been discussed.