Abstract
Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps, eventually yielding models that outperform state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all ‘real’ and 99.7 per cent of all ‘bogus’ instances on a test set containing 1942 ‘bogus’ and 227 ‘real’ instances in total. Furthermore, the networks considered in this work can also successfully classify these objects without relying on difference images, which might pave the way for future detection pipelines that do not contain image subtraction steps at all.
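The real/bogus classification described above amounts to scoring small image cutouts with a convolutional network. As a minimal sketch of that idea, and not the authors' architecture, the snippet below runs one convolutional layer with random weights, global average pooling, and a logistic output over a hypothetical 21×21 detection stamp; all sizes and weights are illustrative:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def real_bogus_score(cutout, kernels, weights, bias):
    """Toy classifier: conv -> ReLU -> global average pool -> logistic output."""
    feats = np.array([np.maximum(conv2d(cutout, k), 0.0).mean() for k in kernels])
    return 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))

rng = np.random.default_rng(0)
cutout = rng.normal(size=(21, 21))    # hypothetical detection stamp
kernels = rng.normal(size=(4, 3, 3))  # four untrained 3x3 filters
score = real_bogus_score(cutout, kernels, rng.normal(size=4), 0.0)
print(0.0 < score < 1.0)  # sigmoid output is probability-like
```

In a trained system the filters and output weights would be learned from labelled ‘real’ and ‘bogus’ examples; this sketch only shows the forward pass.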
Abstract
Type Iax supernovae (SNe Iax) represent the largest class of peculiar white dwarf supernovae. The type Iax SN 2012Z in NGC 1309 is the only white dwarf supernova with a detected progenitor system in pre-explosion observations. Deep Hubble Space Telescope (HST) images taken before SN 2012Z show a luminous, blue source that we have interpreted as a helium-star companion (donor) to the exploding white dwarf. We present here late-time HST observations taken ∼1400 days after the explosion to test this model. We find the SN light curve can empirically be fit by an exponential-decay model in magnitude units. The fitted asymptotic brightness is within 10% of our latest measurements and approximately twice the brightness of the pre-explosion source. The decline of the light curve is too slow to be powered by ⁵⁶Co or ⁵⁷Co decay: if radioactive decay is the dominant power source, it must be from longer half-life species like ⁵⁵Fe. Interaction with circumstellar material may contribute to the light curve, as may shock heating of the companion star. Companion-star models underpredict the observed flux in the optical, producing most of their flux in the UV at these epochs. A radioactively heated bound remnant, left after only a partial disruption of the white dwarf, is also capable of producing the observed excess late-time flux. Our analysis suggests that the total ejecta + remnant mass is consistent with the Chandrasekhar mass for a range of SNe Iax.
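The exponential-decay fit in magnitude units mentioned above has the generic form m(t) = m_inf + A·exp(−t/τ), where m_inf is the asymptotic brightness. A minimal sketch of that model follows; the parameter values are purely illustrative, not the published fit:

```python
import numpy as np

def mag_model(t, m_inf, amp, tau):
    """Empirical exponential decay in magnitude units; approaches m_inf as t grows."""
    return m_inf + amp * np.exp(-t / tau)

# Illustrative parameters only (magnitudes, magnitudes, days) - not the paper's values.
m_inf, amp, tau = 26.0, 5.0, 300.0

t = np.array([100.0, 500.0, 1400.0])
mags = mag_model(t, m_inf, amp, tau)

# By ~1400 days the model has essentially settled onto the asymptote m_inf.
print(abs(mag_model(1400.0, m_inf, amp, tau) - m_inf) < 0.1)
```

Because magnitudes are logarithmic in flux, a light curve that flattens to m_inf in this way corresponds to a flux source that does not decay away, which is why the fitted asymptote carries physical weight.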
•Spatial location as a grouping variable is utilised to construct the hierarchical model.
•Shrinkage of model parameters can be affected by sample size and lithology types.
•Hierarchical model can improve the inference accuracy of the linear relationship between metal grades.
Ore sorting is a preconcentration technology and can dramatically reduce energy and water usage to improve the sustainability and profitability of a mining operation. In porphyry Cu deposits, Cu is the primary target, with ores usually containing secondary ‘pay’ metals such as Au and Mo, and gangue elements such as Fe and As. These secondary and deleterious materials vary in correlation type and strength with Cu, and due to sensing technology limitations they cannot be detected simultaneously via magnetic resonance (MR) ore sorting. Inferring the relationships between Cu and other elemental abundances is therefore particularly critical for mineral processing.
The variations in metal grade relationships occur due to the transition into different geological domains. This raises two questions: how to define these geological domains, and how the metal grade relationships are influenced by them. In this paper, a linear relationship is assumed between Cu grade and other metal grades. We apply a Bayesian hierarchical (partial-pooling) model to quantify the linear relationships between Cu, Au, and Fe grades from geochemical bore core data. The hierarchical model was compared with two other models: a ‘complete-pooling’ model and a ‘no-pooling’ model. Mining blocks were split based on spatial domain to construct the hierarchical model. Geochemical bore core data record metal grades measured by laboratory assay, together with the spatial coordinates of the sample locations. Two case studies from different porphyry Cu deposits were used to evaluate the performance of the hierarchical model. Markov chain Monte Carlo (MCMC) was used to sample the posterior parameters. Our results show that the Bayesian hierarchical model dramatically reduced the posterior predictive variance for metal grade regression compared to the no-pooling model. In addition, the posterior inference in the hierarchical model is insensitive to the choice of prior. The data are well represented in the posterior, which indicates a robust model. The results show that the spatial domain can be successfully utilised for metal grade regression. Uncertainty in estimating the relationship between pay metals and both secondary and gangue elements is quantified and shown to be reduced with partial pooling. Thus, the proposed Bayesian hierarchical model can offer a reliable and stable way to monitor the relationship between metal grades for ore sorting and other mineral processing options.
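The three pooling regimes compared above can be sketched on synthetic data. In a full hierarchical model the shrinkage weight is inferred via MCMC; here, as a hypothetical illustration only, it is fixed, so each domain's partial-pooled slope is simply a compromise between its own least-squares slope (no pooling) and the global slope (complete pooling):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bore-core data: Fe grade vs Cu grade in three spatial domains,
# each with its own true slope (hypothetical values, not deposit data).
true_slopes = [0.8, 1.2, 1.0]
domains = []
for b in true_slopes:
    cu = rng.uniform(0.1, 2.0, size=30)
    fe = b * cu + rng.normal(scale=0.2, size=cu.size)
    domains.append((cu, fe))

def ols_slope(x, y):
    """Least-squares slope through the origin."""
    return float(x @ y / (x @ x))

no_pool = [ols_slope(cu, fe) for cu, fe in domains]   # one slope per domain
all_cu = np.concatenate([cu for cu, _ in domains])
all_fe = np.concatenate([fe for _, fe in domains])
complete_pool = ols_slope(all_cu, all_fe)             # single shared slope

# Partial pooling: shrink each domain slope toward the global slope. The fixed
# weight lam stands in for the between/within-domain variance ratio that a
# hierarchical model would infer from the data.
lam = 0.5
partial_pool = [lam * b + (1 - lam) * complete_pool for b in no_pool]

for b_np, b_pp in zip(no_pool, partial_pool):
    lo, hi = sorted([b_np, complete_pool])
    print(lo <= b_pp <= hi)  # shrunk slope lies between the two extremes
```

The practical benefit described in the abstract, reduced posterior predictive variance relative to no pooling, comes from exactly this borrowing of strength across domains: small or noisy domains are pulled toward the global estimate.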
Traditional approaches to develop 3D geological models employ a mix of quantitative and qualitative scientific techniques, which do not fully provide quantification of uncertainty in the constructed models and fail to optimally weight geological field observations against constraints from geophysical data. Here, using the Bayesian Obsidian software package, we develop a methodology to fuse lithostratigraphic field observations with aeromagnetic and gravity data to build a 3D model in a small (13.5 km × 13.5 km) region of the Gascoyne Province, Western Australia. Our approach is validated by comparing 3D model results to independently constrained geological maps and cross-sections produced by the Geological Survey of Western Australia. By fusing geological field data with aeromagnetic and gravity surveys, we show that 89% of the modelled region has >95% certainty for a particular geological unit for the given model and data. The boundaries between geological units are characterized by narrow regions with <95% certainty, which are typically 400–1000 m wide at the Earth’s surface and 500–2000 m wide at depth. Beyond ~4 km depth, the model requires geophysical survey data with longer wavelengths (e.g., active seismic) to constrain the deeper subsurface. Although Obsidian was originally built for sedimentary basin problems, there is reasonable applicability to deformed terranes such as the Gascoyne Province. Ultimately, modification of the Bayesian engine to incorporate structural data will aid in developing more robust 3D models. Nevertheless, our results show that surface geological observations fused with geophysical survey data can yield reasonable 3D geological models with narrow uncertainty regions at the surface and shallow subsurface, which will be especially valuable for mineral exploration and the development of 3D geological models under cover.
•Bayesian fusion of lithostratigraphic observations with geophysical data
•Technique validated in a data-rich area of the Gascoyne Province, Western Australia
•Almost 90% of region’s surface modelled at >95% certainty
•Less precise constraints at depth; poor constraints deeper than 4 km
•Technique useful for greenfields mineral exploration
We put constraints on the properties of the progenitors of peculiar calcium-rich transients using the distribution of locations within their host galaxies. We confirm that this class of transients does not follow the galaxy stellar mass profile and is more likely to be found in remote locations of their apparent hosts. We test the hypothesis that these transients are from low-metallicity progenitors by comparing their spatial distributions with the predictions of self-consistent cosmological simulations that include star formation and chemical enrichment. We find that while metal-poor stars and our transient sample show a consistent preference for large offsets, metallicity alone cannot explain the extreme cases. Invoking a lower age limit on the progenitor helps to improve the match, indicating these events may result from a very old metal-poor population. We also investigate the radial distribution of globular cluster systems, and show that they too are consistent with the class of calcium-rich transients. Because photometric upper limits exist for globular clusters for some members of the class, a production mechanism related to the dense environment of globular clusters is not favoured for the calcium-rich events. However, the methods developed in this paper may be used in the future to constrain the effects of low metallicity on radially distant core-collapse events or help establish a correlation with globular clusters for other classes of peculiar explosions.
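Comparisons of the kind described, between the radial offset distribution of a transient sample and that of a model population, are commonly made with a two-sample Kolmogorov–Smirnov test. The sketch below uses purely synthetic offsets (the paper's actual samples and simulation outputs are not reproduced here) just to illustrate the mechanics:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Synthetic host-normalised radial offsets (illustrative only):
# a population tracing stellar mass (small offsets) vs a remote population.
mass_tracing = rng.exponential(scale=1.0, size=500)
remote = rng.exponential(scale=3.0, size=40)

# Two-sample KS test: stat is the maximum difference between the empirical
# cumulative offset distributions; small p rejects a common parent population.
stat, p = ks_2samp(mass_tracing, remote)
print(stat, p)
```

With real data, the "does not follow the stellar mass profile" claim corresponds to a small p-value when the transient offsets are tested against offsets drawn in proportion to host light or mass.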
•Consideration of multiple conceptual Fe ore mineral systems models for mineral prospectivity modelling.
•Inclusion of depth representation for 2D prospectivity analyses results in richer structural representation.
•Use of informative and geologically relevant negative training point examples.
•Mineral prospectivity model assessment using geological plausibility.
The past two decades have seen a rapid adoption of artificial intelligence methods applied to mineral exploration. More recently, the easier acquisition of some types of data has inspired a broad literature that has examined many machine learning and modelling techniques that combine exploration criteria, or ‘features’, to generate predictions for mineral prospectivity. Central to the design of prospectivity models is a ‘mineral system’, a conceptual model describing the key geological elements that control the timing and location of economic mineralisation. The mineral systems model defines what constitutes a training set, which features represent geological evidence of mineralisation, how features are engineered and what modelling methods are used. Mineral systems are knowledge-driven conceptual models, thus all parameter choices are subject to human biases and opinion, so alternative models are possible. However, the effect of alternative mineral systems models on prospectivity is rarely compared, despite the potential to heavily influence final predictions. In this study, we focus on the effect of conceptual uncertainty on Fe ore prospectivity models in the Hamersley region, Western Australia. Four important considerations are tested. (1) Five different supergene and hypogene conceptual mineral systems models guide the inputs for five forest-based classification prospectivity models. (2) To represent conceptual uncertainty, the predictions are then combined for prospectivity model comparison. (3) Representation of three-dimensional objects as two-dimensional features is tested to address the commonly ignored thickness of geological units. (4) The training dataset is composed of known economic mineralisation sites (deposits) as ‘positive’ examples, and exploration drilling data providing ‘negative’ sampling locations.
Each of the spatial predictions is assessed using independent performance metrics common to AI-based classification methods and subjected to geological plausibility testing. We find that different conceptual mineral systems produce significantly different spatial predictions, thus conceptual uncertainty must be recognised. A benefit of recognising and modelling different conceptual models is that robust and geologically plausible predictions can be made that may guide mineral discovery.
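Mechanically, a forest-based classification prospectivity model of the kind described reduces to fitting a classifier on positive (deposit) and negative (barren drilling) sites described by evidence-layer features, then scoring every map cell. The sketch below uses synthetic features and hypothetical sample sizes and hyperparameters, not the study's data or settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical evidence layers (e.g. distance to structures, unit thickness,
# geophysical responses) sampled at known deposits and barren drill sites.
n_features = 5
deposits = rng.normal(loc=1.0, size=(60, n_features))    # 'positive' examples
barren = rng.normal(loc=-1.0, size=(200, n_features))    # 'negative' examples

X = np.vstack([deposits, barren])
y = np.array([1] * len(deposits) + [0] * len(barren))

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Prospectivity of unseen map cells = predicted probability of the deposit class.
grid = rng.normal(size=(10, n_features))
prospectivity = model.predict_proba(grid)[:, 1]
print(prospectivity.shape, prospectivity.min(), prospectivity.max())
```

Under the study's design, five such models would be trained, one per conceptual mineral systems model, and their per-cell predictions combined to expose where the conceptual models agree or disagree.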
Insulin triggers an extensive signaling cascade to coordinate adipocyte glucose metabolism. It is considered that the major role of insulin is to provide anabolic substrates by activating GLUT4-dependent glucose uptake. However, insulin stimulates phosphorylation of many metabolic proteins. To examine the implications of this on glucose metabolism, we performed dynamic tracer metabolomics in cultured adipocytes treated with insulin. Temporal analysis of metabolite concentrations and tracer labeling revealed rapid and distinct changes in glucose metabolism, favoring specific glycolytic branch points and pyruvate anaplerosis. Integrating dynamic metabolomics and phosphoproteomics data revealed that insulin-dependent phosphorylation of anabolic enzymes occurred prior to substrate accumulation. Indeed, glycogen synthesis was activated independently of glucose supply. We refer to this phenomenon as metabolic priming, whereby insulin signaling creates a demand-driven system to “pull” glucose into specific anabolic pathways. This complements the supply-driven regulation of anabolism by substrate accumulation and highlights an additional role for insulin action in adipocyte glucose metabolism.
•Dynamic ¹³C-tracer metabolomics shows insulin rapidly alters adipocyte metabolism
•Glucose flow favors specific pathways, such as pyruvate anaplerosis
•Besides glucose uptake, insulin triggers anabolism before substrates accumulate
•Insulin-dependent phosphorylation primes adipocytes for glucose metabolism
Krycer et al. explore how insulin regulates adipocyte metabolism. It is widely held that energy storage (anabolism) occurs as a substrate accumulates. However, using dynamic tracer metabolomics and overlaying phosphoproteomics data, they find that insulin signaling triggers anabolism before substrates accumulate, creating a “demand-driven” system to prime adipocytes for glucose metabolism.
Abstract
Bayesian optimization (BO) has been a successful approach to optimize expensive functions whose prior knowledge can be specified by means of a probabilistic model. Due to their expressiveness and tractable closed-form predictive distributions, Gaussian process (GP) surrogate models have been the default go-to choice when deriving BO frameworks. However, as nonparametric models, GPs offer very little in terms of interpretability and informative power when applied to model complex physical phenomena in scientific applications. In addition, the Gaussian assumption also limits the applicability of GPs to problems where the variables of interest may deviate strongly from Gaussianity. In this article, we investigate an alternative modeling framework for BO which makes use of sequential Monte Carlo (SMC) to perform Bayesian inference with parametric models. We propose a BO algorithm to take advantage of SMC’s flexible posterior representations and provide methods to compensate for bias in the approximations and reduce particle degeneracy. Experimental results on simulated engineering applications in detecting water leaks and contaminant source localization are presented showing performance improvements over GP-based BO approaches.
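The SMC machinery the abstract builds on, a particle approximation of a parametric posterior maintained by importance weighting and resampling, can be illustrated on a toy one-dimensional problem. This is a generic sketch, not the authors' algorithm; the model, particle count, and resampling threshold are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy parametric model: observations y_i ~ Normal(theta, 1); infer theta.
theta_true = 2.0
n_particles = 5000

# Particles drawn from a broad prior, with uniform weights.
particles = rng.normal(loc=0.0, scale=5.0, size=n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for y in theta_true + rng.normal(size=20):   # observations processed sequentially
    # Reweight each particle by the likelihood of the new observation.
    weights *= np.exp(-0.5 * (y - particles) ** 2)
    weights /= weights.sum()
    # Resample (plus a small jitter move) when the effective sample size
    # collapses -- the standard remedy for particle degeneracy.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(scale=0.05, size=n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

posterior_mean = float(np.sum(weights * particles))
print(posterior_mean)  # close to theta_true
```

In a BO setting, a weighted particle set like this would stand in for the GP posterior when computing an acquisition function, which is what makes non-Gaussian, parametric surrogates tractable.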