ABSTRACT
We propose a deep-learning approach based on generative adversarial networks (GANs) to reduce noise in weak lensing mass maps under realistic conditions. We apply image-to-image translation using conditional GANs to the mass map obtained from the first-year data of the Subaru Hyper Suprime-Cam (HSC) Survey. We train the conditional GANs on 25 000 mock HSC catalogues that directly incorporate a variety of observational effects. We study the non-Gaussian information in the denoised maps using one-point probability distribution functions (PDFs) and also perform a matching analysis between positive peaks and massive clusters. An ensemble learning technique with our GANs successfully reproduces the PDFs of the lensing convergence. About 60 per cent of the peaks in the denoised maps with height greater than 5σ have counterparts among massive clusters within a separation of 6 arcmin. We show that the PDFs in the denoised maps are not compromised by details of multiplicative biases and photometric redshift distributions, nor by shape measurement errors, and that they show a stronger cosmological dependence than their noisy counterparts. We apply our denoising method to a part of the first-year HSC data and show that the observed mass distribution is statistically consistent with the prediction of the standard ΛCDM model.
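The peak-matching step described above can be sketched as follows; the catalogue positions, the flat-sky distance approximation, and the function name are illustrative assumptions, not the survey pipeline:

```python
import numpy as np

def match_peaks(peaks, clusters, max_sep_arcmin=6.0):
    """For each peak (RA, Dec in deg), return the index of the nearest
    cluster within max_sep_arcmin, or -1 if none.
    Small-angle, flat-sky approximation (no cos(Dec) factor)."""
    peaks = np.atleast_2d(peaks)
    clusters = np.atleast_2d(clusters)
    # pairwise separations in arcmin
    d = np.hypot(peaks[:, None, 0] - clusters[None, :, 0],
                 peaks[:, None, 1] - clusters[None, :, 1]) * 60.0
    nearest = d.argmin(axis=1)
    matched = d[np.arange(len(peaks)), nearest] <= max_sep_arcmin
    return np.where(matched, nearest, -1)

# hypothetical catalogue values for illustration only
peaks = [(30.00, -1.00), (30.50, -1.20)]
clusters = [(30.02, -1.01), (31.50, -1.20)]
print(match_peaks(peaks, clusters))
```

The first peak lies ∼1.3 arcmin from the first cluster and is matched; the second peak has no cluster within 6 arcmin and is flagged with -1.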
Food contamination caused by radioisotopes released from the Fukushima Dai-ichi nuclear power plant is of great public concern. The contamination risk should be estimated for each food item according to its characteristics and geographic environment. However, evaluating current and future risk for food items is generally difficult because of small sample sizes, high detection limits, and insufficient survey periods. Using a statistical model, we evaluated the risk that radioactive cesium in aquatic food items exceeds a regulatory threshold for each species and location. Here we show that the overall contamination risk for aquatic food items is very low. Some freshwater biota, however, are still highly contaminated, particularly in Fukushima. Highly contaminated fish generally tend to have large body sizes and high trophic levels.
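As a simplified illustration of threshold-exceedance risk (not the paper's actual statistical model), one can compute the probability that an assumed log-normally distributed concentration exceeds a regulatory limit:

```python
import math

def exceedance_risk(mu, sigma, threshold=100.0):
    """P(X > threshold) for X ~ LogNormal(mu, sigma).
    100 Bq/kg is Japan's general limit for radioactive cesium in food;
    the log-normal form and parameters here are assumptions."""
    z = (math.log(threshold) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# e.g. a species/location with median 10 Bq/kg and log-sd 1.0 (assumed)
risk = exceedance_risk(math.log(10.0), 1.0)
print(f"{risk:.4f}")
```

A median ten times below the limit still leaves a percent-level exceedance probability when the spread is wide, which is why the tail behaviour, not just the mean, matters for risk assessment.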
For submillimeter spectroscopy with ground-based single-dish telescopes, removing the noise contribution from the Earth's atmosphere and the instrument is essential. For this purpose, we propose a new method based on a data-scientific approach. The key technique is a statistical matrix decomposition that automatically separates the signals of astronomical emission lines from the drift noise components in the fast-sampled (1–10 Hz) time-series spectra obtained by a position-switching (PSW) observation. Because the proposed method does not apply subtraction between two sets of noisy data (i.e., on-source and off-source spectra), it improves the observation sensitivity by a factor of √2. It also reduces artificial signals such as baseline ripples on a spectrum, which may further improve the effective sensitivity. We demonstrate this improvement using spectroscopic data of emission lines toward a high-redshift galaxy observed with a 2 mm receiver on the 50 m Large Millimeter Telescope. Since the proposed method is carried out offline and requires no additional measurements, it offers an instant improvement on spectra already reduced with the conventional method. It also enables efficient deep spectroscopy with future 50 m class large submillimeter single-dish telescopes, where fast PSW observations by mechanical antenna or mirror drive are difficult to achieve.
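The separation idea can be sketched with a toy low-rank decomposition; the simulated drift, line, and noise levels below are assumptions, and real data would call for the full statistical decomposition rather than a single truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-series spectra: rows = fast-sampled spectra (time), cols =
# frequency channels. The drift is strongly correlated across channels
# (low rank in the time-frequency matrix); the astronomical line is a
# weak, time-constant narrow feature plus white noise. All amplitudes
# and shapes here are assumptions for illustration.
T, F = 200, 128
t = np.linspace(0.0, 1.0, T)[:, None]
drift = 5.0 * np.sin(2 * np.pi * t) * np.linspace(1.0, 1.2, F)[None, :]
line = np.zeros(F)
line[60:68] = 0.3                      # weak emission line
data = drift + line[None, :] + 0.05 * rng.standard_normal((T, F))

# Remove the leading singular component, which captures the dominant
# drift, then average over time to recover the line spectrum.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
cleaned = data - s[0] * np.outer(U[:, 0], Vt[0])
spectrum = cleaned.mean(axis=0)
print(spectrum[60:68].mean())          # recovered line amplitude
```

Because the line is never differenced against a second noisy off-source spectrum, its noise floor is set by the full integration time, which is the origin of the √2 sensitivity gain noted above.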
We propose a new generative model of projected cosmic mass density maps inferred from weak gravitational lensing observations of distant galaxies (weak lensing mass maps). We construct the model based on neural style transfer so that it can transform Gaussian weak lensing mass maps into deeply non-Gaussian counterparts as predicted by ray-tracing lensing simulations. We develop an unpaired image-to-image translation method with Cycle-Consistent Generative Adversarial Networks (Cycle GAN), which learns an efficient mapping from an input domain to a target domain. Our model is designed to enjoy important advantages: it is trainable with no need for paired simulation data, flexible enough to make the input domain visually meaningful, and able to rapidly produce a map with a larger sky coverage than the training data without additional learning. Using 10,000 lensing simulations, we find that appropriate labeling of the training data based on field variance allows the model to reproduce the correct scatter in summary statistics for weak lensing mass maps. Compared with a popular log-normal model, our model better predicts the statistical nature of three-point correlations and the local properties of rare high-density regions. We also demonstrate that our model can produce a continuous map with a sky coverage of ∼166 deg² but with non-Gaussian features similar to the training data covering ∼12 deg² within a GPU minute. Hence, our model can be beneficial for the mass production of synthetic weak lensing mass maps, which is of great importance for future precise real-world analyses.
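The log-normal baseline mentioned above can be sketched in a few lines; the field size and variance are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of the log-normal model used as the comparison
# baseline (not the trained Cycle GAN): a Gaussian field g is mapped
# to kappa = exp(g - sigma^2/2) - 1, which keeps zero mean while
# acquiring the positive skew and kappa > -1 support characteristic
# of non-linear lensing convergence. Grid size and sigma are assumed.
sigma = 0.5
g = sigma * rng.standard_normal((256, 256))
kappa = np.exp(g - sigma**2 / 2) - 1.0

print(kappa.mean(), kappa.min())       # ~0 mean, bounded below by -1
```

The transform captures one-point non-Gaussianity cheaply, but, as the abstract notes, it falls short on three-point correlations and rare high-density regions, which is the gap the Cycle GAN model targets.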
With an emphasis on improving fidelity even in super-resolution regimes, new imaging techniques have been intensively developed over the last several years, and they may provide substantial improvements to the interferometric observation of protoplanetary disks. In this study, sparse modeling (SpM) is applied for the first time to observational data sets taken by the Atacama Large Millimeter/submillimeter Array (ALMA). The two data sets used in this study were taken independently with different array configurations at Band 7 (330 GHz), targeting the protoplanetary disk around HD 142527: one in the shorter-baseline array configuration (∼430 m) and the other in the longer-baseline array configuration (∼1570 m). The image resolutions reconstructed from the two data sets differ by a factor of ∼3. We confirm that the previously known disk structures appear in the images produced by both SpM and CLEAN at the standard beam size. The image reconstructed from the shorter-baseline data using SpM matches that obtained from the longer-baseline data using CLEAN, achieving a super-resolution image in which structures finer than the beam size are reproduced. Our results demonstrate that the ongoing intensive development of SpM imaging techniques is beneficial to imaging with ALMA.
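At its core, SpM imaging solves an L1-regularized least-squares problem; the generic ISTA sketch below uses a toy random measurement matrix rather than ALMA's Fourier sampling, and all sizes and parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Recover a sparse "image" x from incomplete linear measurements
# y = A x + noise (m < n) by iterative shrinkage-thresholding (ISTA).
n, m = 100, 40
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)

print(np.flatnonzero(np.abs(x) > 0.3))     # recovered support
```

Because the L1 penalty drives most pixels exactly to zero, the solution is not tied to the beam-size scale of the sampling, which is what permits super-resolution in the favourable cases the abstract reports.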
When the information source is a continuous distribution and the rate-distortion function is strictly larger than the Shannon lower bound, the explicit evaluation of the rate-distortion function is not straightforward. We evaluate the rate-distortion function of an independent and identically distributed gamma source with respect to the absolute-log distortion measure. A logarithmic transformation reduces this rate-distortion problem to one under the absolute distortion measure. Extending the explicit evaluation of the rate-distortion function for Gaussian sources, we obtain a parametric form of the rate-distortion function. We show that the optimal reconstruction distribution consists of a continuous component enclosed by left and right discrete components, and that the left discrete component vanishes when the acceptable distortion is small. We further extend the result to a wider class of source distributions.
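The trade-off described above can be cross-checked numerically with the discretized Blahut–Arimoto algorithm; the grids, gamma shape, and slope parameter below are assumptions for illustration, not the paper's analytic evaluation:

```python
import numpy as np

# Discretized Blahut-Arimoto for an i.i.d. Gamma(shape, 1) source with
# absolute-log distortion d(x, y) = |log x - log y|. Grid ranges and
# resolutions, and the slope parameter s, are assumed for this sketch.
shape = 2.0
x = np.linspace(0.05, 12.0, 400)                 # source grid
px = x ** (shape - 1) * np.exp(-x)
px /= px.sum()
y = np.linspace(0.05, 12.0, 200)                 # reconstruction grid
d = np.abs(np.log(x)[:, None] - np.log(y)[None, :])

s = 5.0                                          # slope parameter
q = np.full(len(y), 1.0 / len(y))                # output marginal
for _ in range(200):
    w = q[None, :] * np.exp(-s * d)              # unnormalized q(y|x)
    w /= w.sum(axis=1, keepdims=True)
    q = px @ w                                   # induced marginal

D = float((px[:, None] * w * d).sum())           # mean distortion
ratio = np.where(w > 0, w / q[None, :], 1.0)     # avoid 0*log(0)
R = float((px[:, None] * w * np.log(ratio)).sum())  # mutual info, nats
print(D, R)
```

Sweeping the slope parameter s traces out (D, R) pairs on the rate-distortion curve, which is the numerical counterpart of the parametric form derived in the paper.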
Signal overlapping is a major bottleneck for protein NMR analysis. We propose a new method, stable-isotope-assisted parameter extraction (SiPex), to resolve overlapping signals by combining amino-acid selective isotope labeling (AASIL) and tensor decomposition. The basic idea of SiPex is that overlapping signals can be decomposed with the help of intensity patterns derived from quantitative fractional AASIL, which also provides amino-acid information. In SiPex, spectra for protein characterization, such as ¹⁵N relaxation measurements, are assembled with those carrying amino-acid information to form a fourth-order tensor, where the intensity patterns from AASIL yield high decomposition performance even if signals share similar chemical shift values or characterization profiles, such as relaxation curves. The loading vectors of each decomposed component, corresponding to an amide group, represent both the amino-acid and the relaxation information. This information link provides an alternative protein analysis method that does not require "assignments" in the usual sense, i.e., chemical shift determinations, since the amino-acid information for some of the residues allows unambiguous assignment through dual selective labeling. SiPex can also decompose signals in time-domain raw data without Fourier transform, even in non-uniformly sampled data without spectral reconstruction. These features should expand biological NMR applications by overcoming the overlapping and assignment problems.
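The underlying tensor decomposition can be illustrated with a minimal CP (canonical polyadic) model fitted by alternating least squares; the tensor here is third-order and fully synthetic, a deliberate simplification of the fourth-order construction described above:

```python
import numpy as np

rng = np.random.default_rng(3)

def unfold(T, mode):
    """Mode-m matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: rows indexed by (i, j) pairs."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

# Two overlapping "amide" components, each a rank-1 tensor of
# (spectral profile) x (labeling-intensity pattern) x (relaxation
# decay). All factor values below are assumed for illustration.
A = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])
B = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.7], [1.0, 1.0]])
C = np.exp(-np.outer(np.arange(5), [0.3, 1.0]))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# CP-ALS from random initialization, rank 2.
F = [rng.standard_normal((sz, 2)) for sz in T.shape]
for _ in range(500):
    for m in range(3):
        others = [F[i] for i in range(3) if i != m]
        kr = khatri_rao(others[0], others[1])
        F[m] = unfold(T, m) @ np.linalg.pinv(kr.T)

rec = np.einsum('ir,jr,kr->ijk', F[0], F[1], F[2])
print(np.linalg.norm(T - rec) / np.linalg.norm(T))  # relative error
```

Even though the two components overlap in every single mode, their joint (spectral, labeling, decay) signatures are distinct, which is what lets the decomposition pull them apart, mirroring how the AASIL intensity patterns disambiguate signals with similar chemical shifts.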
For short-wavelength VLBI observations, it is difficult to measure the phase of the visibility function accurately. The closure phases are reliable observables under this situation, though they are not sufficient to retrieve all of the phase information. We propose a new method, phase retrieval from closure phases (PRECL), which estimates all the visibility phases from the closure phases alone. Combined with a sparse modeling method we have already proposed, the VLBI imaging process then relies on neither a dirty image nor self-calibration. The proposed method is tested numerically and the results are promising.
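The key property PRECL exploits, that station-based phase errors cancel in the sum of visibility phases around a closed triangle of stations, can be verified in a few lines (the phase values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def wrap(phi):
    """Wrap a phase to (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# Assumed true visibility phases on baselines (i,j), (j,k), (k,i).
true = {"ij": 0.7, "jk": -1.1, "ki": 0.9}

# Each station contributes an unknown phase error e; baseline (a,b)
# is corrupted by e_a - e_b, so the errors cancel around the triangle.
e = rng.uniform(-np.pi, np.pi, 3)
obs = {"ij": true["ij"] + e[0] - e[1],
       "jk": true["jk"] + e[1] - e[2],
       "ki": true["ki"] + e[2] - e[0]}

closure_true = wrap(sum(true.values()))
closure_obs = wrap(sum(obs.values()))
print(np.isclose(closure_true, closure_obs))  # -> True
```

This cancellation is exactly why closure phases survive the severe atmospheric phase corruption of short-wavelength VLBI, and why an algorithm that reconstructs individual phases from them removes the need for self-calibration.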
We discuss a nonparametric estimation method for the mixing distributions in mixture models. The problem is formalized as the minimization of a one-parameter objective functional, which reduces to maximum likelihood estimation or kernel vector quantization in special cases. Generalizing the theorem for nonparametric maximum likelihood estimation, we prove the existence and discreteness of the optimal mixing distribution and provide an algorithm to compute it. We demonstrate that, with an appropriate choice of the parameter, the proposed method is less prone to overfitting than the maximum likelihood method. We further discuss the connection between this unifying estimation framework and the rate-distortion problem.
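The maximum likelihood special case can be sketched with an EM iteration over a fixed support grid; the Gaussian component family, grid, and data below are illustrative assumptions, not the paper's general algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Nonparametric MLE of a mixing distribution: data drawn from a
# two-component Gaussian mixture, component family N(mu, 1), and the
# mixing distribution represented by weights on a fixed grid of
# candidate atoms. EM updates the weights; mass concentrates on few
# grid points, illustrating the discreteness of the optimum.
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
grid = np.linspace(-5, 5, 101)
lik = np.exp(-0.5 * (data[:, None] - grid[None, :]) ** 2)

w = np.full(len(grid), 1.0 / len(grid))
for _ in range(500):
    post = lik * w[None, :]
    post /= post.sum(axis=1, keepdims=True)   # responsibilities
    w = post.mean(axis=0)                     # weight update

print(w[grid < 0].sum())   # mass near the negative mode (~0.5)
```

With balanced samples from the two modes, roughly half the mixing mass ends up below zero, and most grid atoms receive negligible weight, consistent with the discreteness result stated above.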