•We study vulnerability to climate change of 141 nations.
•We construct an endogenously weighted composite climate change vulnerability index.
•We fail to reject the hypothesis that a component of the index can be deleted.
•We locate geographically and characterize economically the most vulnerable nations.
The earth’s climate is changing, with global warming attributable to anthropogenic greenhouse gas emissions driven by economic and population growth. Human systems and ecosystems vary in their exposure, mitigation and adaptive capacity, and vulnerability to various forms of climate change. Once mitigation and adaptation efforts have been exhausted, vulnerability remains. Data compiled by the University of Notre Dame covering over 100 nations in 2016 were used to construct a new composite climate change vulnerability index that features endogenously generated weights to aggregate vulnerability indices across six vulnerable sectors. These weights have the potential to inform policy aimed at allocating resources to reduce the cost of limiting vulnerability. The new composite vulnerability index, whose weights differ across sectors and across nations, is compared with the Notre Dame vulnerability index, which uses weights equal across sectors and constant across nations. Although the two indices agree on the identity of the most vulnerable nations, there is a statistically significant difference between the two indices. In addition, a nonparametric statistical test failed to reject the null hypothesis that one sectoral index could be deleted from the composite index without significant loss of information. This also has potentially important policy implications.
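The abstract does not specify how the endogenous weights are generated. One standard way to produce nation-specific, data-driven weights for a composite index is the DEA-style "benefit of the doubt" model, in which each nation receives the weights most favourable to it subject to a common normalisation. The sketch below uses that scheme with hypothetical sectoral data; the paper's actual weighting method may differ.

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(Y, i):
    """'Benefit of the doubt' composite score for unit i: choose the
    weights most favourable to that unit, subject to no unit's weighted
    score exceeding 1 (standard DEA-style normalisation)."""
    n, m = Y.shape
    # linprog minimises, so minimise -Y[i] @ w to maximise unit i's score
    res = linprog(-Y[i], A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun

# hypothetical sectoral vulnerability indices: 4 nations x 3 sectors
Y = np.array([[0.9, 0.4, 0.7],
              [0.5, 0.8, 0.6],
              [0.3, 0.3, 0.2],
              [0.6, 0.6, 0.6]])
scores = [bod_score(Y, i) for i in range(len(Y))]
print(scores)  # each score lies in (0, 1]; best performers score 1
```

Because the weights are chosen per nation, they differ across nations as described above, in contrast to the equal, constant weights of the Notre Dame index.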
•Alloyed NiCo catalysts outperform monometallic Ni and Co in activity and stability.
•Ni is more selective than Co towards CH4 cracking and CO2 activation.
•Co is more selective than Ni toward the Boudouard Reaction.
Nickel- and cobalt-based catalysts are promising candidates for the dry reforming of methane, but the role of the metals in regulating catalyst selectivity has thus far been overlooked. Here we report the significant impact that catalyst selectivity has on the performance of Ni and Co catalysts for the dry reforming of methane. The role of Ni and Co in defining catalyst selectivity was examined via in-depth investigations into the tendency of the catalysts towards the Boudouard, methane-cracking and reverse water-gas shift side reactions. Decoupling the side reactions demonstrated that Co deposits have a high affinity for the removal of carbon species via oxidation, while Ni is more active towards CH4 decomposition. Superior catalytic performance is thereby achieved by combining the high activity of Ni towards CH4 with the stabilising effect and carbon resistance of Co. Characterisation of the materials illustrated the formation of well-dispersed NiCo alloys on the FSP-alumina, with the resulting strong bimetallic interactions driving enhanced catalytic performance via suppression of the selectivity of Co towards the Boudouard reaction and of Ni towards methane cracking.
Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match ...the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
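The core step described above, learning a projection under which the empirical distributions of source and target samples are close, is commonly instantiated by minimising the maximum mean discrepancy (MMD) between the projected samples. The sketch below computes a biased RBF-kernel MMD estimate for a fixed random orthonormal projection; the method itself optimises the projection (the dimensions, bandwidth, and data here are illustrative).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # pairwise squared Euclidean distances, then Gaussian kernel
    d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d)

def mmd2(Xs, Xt, gamma=1.0):
    # biased estimate of the squared maximum mean discrepancy
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 10))   # "source" samples
Xt = rng.normal(0.5, 1.0, (100, 10))   # "target" samples, shifted
W, _ = np.linalg.qr(rng.normal(size=(10, 3)))  # random 10-d -> 3-d projection
print(mmd2(Xs @ W, Xt @ W))  # the method searches for W minimising this
```

MMD is zero when the two projected samples coincide and grows with distributional mismatch, which is what makes it a usable objective for the projection search.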
•We consider productivity measurement with a single constant input.
•We show that Malmquist and Hicks–Moorsteen productivity indices coincide.
•We show that orientation of productivity measurement is irrelevant.
•We show that productivity change decomposes uniquely into technical efficiency change and technical change.
•We derive an aggregate productivity index from individual productivity indices.
We consider productivity measurement based on radial DEA models with a single constant input. We show that in this case the Malmquist and the Hicks–Moorsteen productivity indices coincide and are multiplicatively complete, the choice of orientation of the Malmquist index for the measurement of productivity change does not matter, and there is a unique decomposition of productivity change containing two independent sources, namely technical efficiency change and technical change. Technical change decomposes in an infinite number of ways into a radial magnitude effect and an output bias effect. We also show that the aggregate productivity index is given by the geometric mean between any two periods of the simple arithmetic averages of the individual contemporaneous and mixed period distance functions.
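For reference, the output-oriented Malmquist index and its standard decomposition into efficiency change and technical change take the following textbook form (standard notation, not quoted from the paper; the paper's point is that with a single constant input the choice of orientation does not matter):

```latex
M_o(x^t, y^t, x^{t+1}, y^{t+1}) =
\left[
\frac{D_o^t(x^{t+1}, y^{t+1})}{D_o^t(x^t, y^t)}
\cdot
\frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^t, y^t)}
\right]^{1/2}
= \underbrace{\frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^t(x^t, y^t)}}_{\text{efficiency change}}
\cdot
\underbrace{\left[
\frac{D_o^t(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^{t+1}, y^{t+1})}
\cdot
\frac{D_o^t(x^t, y^t)}{D_o^{t+1}(x^t, y^t)}
\right]^{1/2}}_{\text{technical change}}
```

Multiplying the two right-hand factors and cancelling terms recovers the geometric-mean form on the left, which is the sense in which the decomposition is exact.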
This paper presents a survey and a comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade, and places them in a feature-based taxonomy comprising four categories: chromacity, physical, geometry and textures. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods.
The evaluation indicates that the shadow detection approaches make different contributions and that each has individual strengths and weaknesses. Out of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have a different orientation. The chromacity-based method is the fastest to implement and run, but it is sensitive to noise and less effective in low-saturation scenes. The physical method improves upon the accuracy of the chromacity method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to those of the background. The small-region texture-based method is especially robust for pixels whose neighbourhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture-based method produces the most accurate results, but has a significant computational load due to its multiple processing steps.
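The chromacity-based approach discussed above is classically implemented as an HSV test: a foreground pixel is flagged as shadow when its brightness drops by a bounded ratio relative to the background while hue and saturation stay nearly unchanged. A minimal sketch follows; the thresholds are illustrative, not tuned values from the survey, and hue wrap-around is ignored for brevity.

```python
import numpy as np

def hsv_shadow_mask(frame_hsv, bg_hsv, alpha=0.4, beta=0.9,
                    tau_s=0.15, tau_h=30.0):
    """Chromacity shadow test on HSV images (H in degrees, S and V in [0, 1]).
    A pixel is shadow when V_frame / V_bg lies in [alpha, beta] and the
    hue and saturation differences are below tau_h and tau_s."""
    h_f, s_f, v_f = (frame_hsv[..., i].astype(float) for i in range(3))
    h_b, s_b, v_b = (bg_hsv[..., i].astype(float) for i in range(3))
    ratio = v_f / np.maximum(v_b, 1e-6)
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(s_f - s_b) <= tau_s)
            & (np.abs(h_f - h_b) <= tau_h))

# synthetic 1x2 image: pixel 0 is a cast shadow (same colour, 40 % darker),
# pixel 1 is a genuine object (different hue)
bg = np.array([[[100.0, 0.5, 0.80], [100.0, 0.5, 0.80]]])
frame = np.array([[[100.0, 0.5, 0.48], [200.0, 0.5, 0.48]]])
print(hsv_shadow_mask(frame, bg))  # [[ True False]]
```

The speed and the noise sensitivity noted in the evaluation both follow directly from this formulation: it is a cheap per-pixel test, but any chromatic noise perturbs the three thresholded quantities.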
► Survey of literature on shadow detection, covering methods in the last decade. ► Methods placed in a feature-based taxonomy: chromacity, physical, geometry & textures. ► Thorough quantitative & qualitative evaluation of methods across the taxonomy. ► Use of object tracking accuracy to determine practical usefulness of shadow detection. ► Open source C++ code for the implemented shadow detection methods.
Previous studies of the so-called frontier production function have not utilized an adequate characterization of the disturbance term for such a model. In this paper we provide an appropriate ...specification, by defining the disturbance term as the sum of symmetric normal and (negative) half-normal random variables. Various aspects of maximum-likelihood estimation for the coefficients of a production function with an additive disturbance term of this sort are then considered.
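The composed disturbance described above, a symmetric normal plus a negative half-normal, can be simulated directly to see its key property: a negatively skewed error with mean below zero, reflecting one-sided inefficiency relative to the frontier. The parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
sigma_v, sigma_u = 0.3, 1.0

v = rng.normal(0.0, sigma_v, n)          # symmetric noise component
u = np.abs(rng.normal(0.0, sigma_u, n))  # half-normal inefficiency component
eps = v - u                              # composed disturbance: eps = v - u

# E[eps] = -sigma_u * sqrt(2 / pi), approximately -0.798 for sigma_u = 1
print(eps.mean())
```

The non-zero mean and negative skew are exactly what distinguish this specification from the symmetric disturbance of an average (non-frontier) production function, and they are what the maximum-likelihood estimation discussed in the paper exploits.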
•An efficiency measure for the proportional directional distance function is derived.
•It is related to the output-oriented radial efficiency measurement model.
•The standard Malmquist is written as the ratio of output and input quantity indices.
A natural multiplicative efficiency measure for the Constant Returns to Scale proportional directional distance function (pDDF) is derived, relating its associated linear program to that of the well-known output-oriented radial efficiency measurement model. Based on this relationship, a traditional CCD (Caves, Christensen and Diewert) Malmquist index is introduced to show that, when it is based on the new efficiency measure associated with the pDDF, rather than on a radial efficiency measure associated with an oriented distance function, it becomes a Total Factor Productivity (TFP) index. This constitutes a new result, because heretofore the traditional CCD Malmquist index has not been considered a TFP index. Additionally, a new decomposition of the CCD Malmquist index is proposed that expresses productivity change as the ratio of two components, productivity change due to output change in the numerator and productivity change due to input change in the denominator. In an Appendix the efficiency measure is extended to include any returns to scale pDDF.
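For context, the proportional directional distance function evaluates the directional distance function at the observation's own direction vector g = (x, y); in standard notation (not quoted from the paper):

```latex
\vec{D}_T(x, y;\, x, y) \;=\; \max\bigl\{\beta \;:\; \bigl((1-\beta)\,x,\; (1+\beta)\,y\bigr) \in T\bigr\}
```

Under constant returns to scale, rescaling the frontier point \(((1-\beta^*)x,\,(1+\beta^*)y)\) by \(1/(1-\beta^*)\) places \(\bigl(x,\; y\,(1+\beta^*)/(1-\beta^*)\bigr)\) on the frontier, so the output-oriented radial (Farrell-type) efficiency of \((x, y)\) equals \((1-\beta^*)/(1+\beta^*)\). This indicates why a multiplicative measure of this form can link the pDDF to the output-oriented radial model, as the paper's result formalises.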
This paper describes the current update on macromolecular model validation services that are provided at the MolProbity website, emphasizing changes and additions since the previous review in 2010. There have been many infrastructure improvements, including a rewrite of the previous Java utilities to use existing or newly written Python utilities in the open‐source CCTBX portion of the Phenix software system. This improves long‐term maintainability and enhances the thorough integration of MolProbity‐style validation within Phenix. There is now a complete MolProbity mirror site at http://molprobity.manchester.ac.uk. GitHub serves our open‐source code, reference datasets, and the resulting multi‐dimensional distributions that define most validation criteria. Coordinate output after Asn/Gln/His “flip” correction is now more idealized, since the post‐refinement step has apparently often been skipped in the past. Two distinct sets of heavy‐atom‐to‐hydrogen distances and accompanying van der Waals radii have been researched and improved in accuracy: one for the electron‐cloud‐center positions suitable for X‐ray crystallography, and one for nuclear positions. New validations include messages at input about problem‐causing format irregularities, updates of Ramachandran and rotamer criteria from the million quality‐filtered residues in a new reference dataset, the CaBLAM Cα‐CO virtual‐angle analysis of backbone and secondary structure for cryoEM or low‐resolution X‐ray, and flagging of the very rare cis‐nonProline and twisted peptides, which have recently been greatly overused. Due to wide application of MolProbity validation and corrections by the research community, in Phenix, and at the worldwide Protein Data Bank, newly deposited structures have continued to improve greatly as measured by MolProbity's unique all‐atom clashscore.
Recalibrating the cosmic star formation history
Wilkins, Stephen M; Lovell, Christopher C; Stanway, Elizabeth R
Monthly Notices of the Royal Astronomical Society, 12/2019, Volume 490, Issue 4
Journal Article · Peer reviewed · Open access
ABSTRACT
The calibrations linking observed luminosities to the star formation rate (SFR) depend on the assumed stellar population synthesis model, initial mass function (IMF), star formation and metal enrichment history, and whether reprocessing by dust and gas is included. Consequently, the shape and normalization of the inferred cosmic star formation history are sensitive to these assumptions. Using v2.2.1 of the Binary Population and Spectral Synthesis (bpass) model, we determine a new set of calibration coefficients for the ultraviolet, thermal infrared, and hydrogen recombination lines. The ultraviolet and thermal infrared coefficients are 0.15–0.2 dex higher than those widely utilized in the literature, while the H α coefficient is ∼0.35 dex larger. These differences arise in part from the inclusion of binary evolution pathways but predominantly reflect an extension of the IMF to 300 M⊙ and a change in the choice of reference metallicity. We use these new coefficients to recalibrate the cosmic star formation history, and find improved agreement between the integrated cosmic star formation history and the in situ measured stellar mass density as a function of redshift. However, these coefficients produce new tension between SFR densities inferred from the ultraviolet and thermal infrared and those from H α.
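Since an offset quoted in dex is a base-10 logarithmic shift, a coefficient that is d dex higher changes the inferred SFR by a multiplicative factor of 10**d. The quick check below converts the offsets quoted in the abstract into linear factors (the conversion is generic; whether the factor raises or lowers the inferred SFR depends on the sign convention of the calibration).

```python
# a coefficient offset of d dex corresponds to a multiplicative factor 10**d
for dex in (0.15, 0.2, 0.35):
    print(f"{dex:.2f} dex -> factor {10**dex:.2f}")
```

So the ∼0.35 dex shift in the H α coefficient corresponds to more than a factor of two in the inferred SFR, which is why the recalibration materially moves the cosmic star formation history.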