This paper proposes a stochastic model of a bipartite credit network between banks and the non-bank corporate sector that encapsulates basic stylized facts found in comprehensive data sets of bank-firm loans for a number of countries. When performing computational experiments with this model, we find that it shows pronounced non-linear behavior under shocks: the default of a single unit mostly has practically no knock-on effects, but in a certain number of cases it can lead to an almost full-scale collapse of the entire system. The dependency of the overall outcome on firm characteristics such as size or number of loans appears fuzzy. Distinguishing between contagion due to interbank credit and contagion due to joint exposure to counterparty risk via loans to firms, the latter channel appears more important for the contagious spread of defaults.
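The two contagion channels and the non-linearity described above can be illustrated with a toy sketch (not the paper's calibrated model; the network, exposure sizes, and capital buffers below are illustrative assumptions): banks hold loans to firms and interbank claims, a firm's default wipes out the banks' loans to it, a bank defaults when its losses exceed its capital, and its interbank creditors then lose their claims on it.

```python
# Toy default cascade on a bank-firm credit network with an interbank
# layer. All numbers are illustrative assumptions, chosen so that one
# firm's default topples every bank while the others have no knock-on
# effects -- the "fuzzy", non-linear shock response described above.
import numpy as np

loans = np.array([      # loans[b, f]: bank b's loan to firm f
    [2.0, 0.0, 1.0],
    [2.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
])
interbank = np.array([  # interbank[b, c]: bank b's claim on bank c
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.5],
    [2.0, 0.0, 0.0],
])
capital = np.array([2.5, 1.5, 1.5])

def cascade(defaulted_firm):
    """Return the set of banks that default after one firm fails."""
    losses = loans[:, defaulted_firm].copy()
    dead = set()
    while True:
        newly = {b for b in range(len(capital))
                 if b not in dead and losses[b] > capital[b]}
        if not newly:
            return dead
        dead |= newly
        for b in newly:            # creditors lose their claims on b
            losses += interbank[:, b]

full_collapse = cascade(0)   # firm 0 triggers a system-wide cascade
no_effect = cascade(1)       # firm 1's default is absorbed
```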
We explore the network topology arising from a dataset of the overnight interbank transactions on the e-MID trading platform from January 1999 to December 2010. In order to shed light on the hierarchical structure of the banking system, we estimate different versions of a core–periphery model. Our main findings are: (1) the identified core is quite stable over time in its size as well as in many structural properties, (2) there is also high persistence over time of banks’ identified positions as members of the core or periphery, (3) allowing for asymmetric ‘coreness’ with respect to lending and borrowing considerably improves the fit and reveals a high level of asymmetry and relatively little correlation between banks’ ‘in-coreness’ and ‘out-coreness’, and (4) we show that the identified core–periphery structure could not have been obtained spuriously from random networks. During the financial crisis of 2008, the reduction of interbank lending was mainly due to core banks reducing their numbers of active outgoing links.
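The discrete core–periphery fit can be sketched as an error-count minimization in the spirit of Borgatti and Everett (a minimal illustration, not the estimation procedure used on the e-MID data): core–core pairs should be linked, periphery–periphery pairs should not, and mixed pairs are unrestricted. The toy directed network below is an assumption for demonstration.

```python
# Brute-force discrete core-periphery fit on a toy directed network.
# The adjacency matrix is an illustrative assumption: banks 0-2 form
# a densely connected core, banks 3-4 a sparsely connected periphery.
from itertools import combinations
import numpy as np

A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
])

def cp_errors(A, core):
    """Count violations of the ideal core-periphery pattern."""
    core = set(core)
    err = 0
    for i in range(len(A)):
        for j in range(len(A)):
            if i == j:
                continue
            if i in core and j in core and A[i, j] == 0:
                err += 1          # missing core-core link
            if i not in core and j not in core and A[i, j] == 1:
                err += 1          # forbidden periphery-periphery link
    return err

best_core = min((c for k in range(1, 5) for c in combinations(range(5), k)),
                key=lambda c: cp_errors(A, c))
```

On this toy network the minimizer recovers banks 0–2 as the core with zero errors; the asymmetric ‘in-coreness’/‘out-coreness’ variant of the abstract would instead fit separate core sets for the row (lending) and column (borrowing) patterns.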
This paper develops a methodology for estimating the parameters of dynamic opinion or expectation formation processes with social interactions. We study a simple stochastic framework of a collective process of opinion formation by a group of agents who face a binary decision problem. The aggregate dynamics of the individuals’ decisions can be analyzed via the stochastic process governing the ensemble average of choices. Numerical approximations to the transient density for this ensemble average allow the evaluation of the likelihood function on the basis of discrete observations of the social dynamics. This approach can be used to estimate the parameters of the opinion formation process from aggregate data on its average realization. Our application to a well-known business climate index provides strong indication of social interaction as an important element in respondents’ assessment of the business climate.
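The data-generating process targeted by the estimation can be sketched as follows (a minimal simulation in the spirit of Weidlich-type interaction models; all parameter values are illustrative assumptions): each agent holds opinion +1 or −1 and switches with a probability that depends on the current average opinion, so individual choices feed back on the ensemble average whose transient density the likelihood is built on.

```python
# Minimal binary opinion-formation simulation with social interaction.
# Parameters nu (switching propensity), alpha0 (bias), and alpha1
# (interaction strength) are illustrative assumptions; alpha1 > 0 pulls
# agents toward the current majority.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=200, steps=2000, nu=0.05, alpha0=0.0, alpha1=0.8):
    ops = rng.choice([-1, 1], size=n)
    path = []
    for _ in range(steps):
        x = ops.mean()                       # ensemble average of choices
        p_up = nu * np.exp(alpha0 + alpha1 * x)      # -1 -> +1
        p_down = nu * np.exp(-(alpha0 + alpha1 * x)) # +1 -> -1
        u = rng.random(n)
        flip = np.where(ops == -1, u < p_up, u < p_down)
        ops[flip] *= -1
        path.append(x)
    return np.array(path)

x_path = simulate()   # discrete observations of the average realization
```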
We use weekly survey data on short-term and medium-term sentiment of German investors in order to study the causal relationship between investors’ mood and subsequent stock price changes. In contrast to extant literature for other countries, a trivariate vector autoregression for short-run sentiment, medium-run sentiment, and stock index returns allows us to reject exogeneity of returns. Depending on the chosen VAR specification, returns are found to either follow a feedback process caused by medium-run sentiment, or to form a simultaneous system together with the two sentiment measures. An out-of-sample forecasting experiment on the basis of estimated subset VAR models shows significant exploitable linear structure. However, trading experiments do not yield convincing evidence of significant economic gains from the VAR forecasts, and it appears that predictability of returns from sentiment decreases during the recent market gyrations.
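The trivariate VAR can be sketched with equationwise OLS (a minimal VAR(1) on simulated data, not the paper's subset specification; the data-generating coefficient matrix, in which returns are fed back by medium-run sentiment only, is an illustrative assumption):

```python
# VAR(1) estimated by OLS on simulated trivariate data. Variable order:
# short-run sentiment, medium-run sentiment, returns. A_true is an
# assumed data-generating matrix where returns load on lagged
# medium-run sentiment -- the kind of feedback the abstract describes.
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.2, 0.4, 0.0],
                   [0.0, 0.3, 0.1]])
T = 5000
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=3)

X, Y = y[:-1], y[1:]                              # lagged regressors
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # y[t] ~ A_hat @ y[t-1]
```

With enough observations the OLS estimate recovers the feedback structure; exogeneity of returns would correspond to a zero bottom row outside the returns column, which the data here reject.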
Sequence assembly of large and repeat-rich plant genomes has been challenging, requiring substantial computational resources and often several complementary sequence assembly and genome mapping approaches. The recent development of fast and accurate long-read sequencing by circular consensus sequencing (CCS) on the PacBio platform may greatly increase the scope of plant pan-genome projects. Here, we compare current long-read sequencing platforms regarding their ability to rapidly generate contiguous sequence assemblies in pan-genome studies of barley (Hordeum vulgare). Most long-read assemblies are clearly superior to the current barley reference sequence based on short-reads. Assemblies derived from accurate long reads excel in most metrics, but the CCS approach was the most cost-effective strategy for assembling tens of barley genomes. A downsampling analysis indicated that 20-fold CCS coverage can yield very good sequence assemblies, while even five-fold CCS data may capture the complete sequence of most genes. We present an updated reference genome assembly for barley with near-complete representation of the repeat-rich intergenic space. Long-read assembly can underpin the construction of accurate and complete sequences of multiple genomes of a species to build pan-genome infrastructures in Triticeae crops and their wild relatives.
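A back-of-envelope check makes the downsampling result plausible: under the idealized Lander–Waterman assumption of uniformly random sequencing (ignoring repeats and coverage biases), the probability that a given base is covered at all is 1 − e^(−c) for c-fold coverage.

```python
# Lander-Waterman breadth-of-coverage estimate: with c-fold random
# coverage, a base is sequenced at least once with probability
# 1 - exp(-c). An idealization -- real coverage is biased and
# repeat-rich regions behave worse.
import math

def breadth(c):
    return 1 - math.exp(-c)

print(breadth(5))    # ~0.9933: five-fold CCS touches most gene bases
print(breadth(20))   # effectively complete breadth at 20-fold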
Chromosome-scale genome sequence assemblies underpin pan-genomic studies. Recent genome assembly efforts in the large-genome Triticeae crops wheat and barley have relied on the commercial closed-source assembly algorithm DeNovoMagic. We present TRITEX, an open-source computational workflow that combines paired-end, mate-pair, and 10X Genomics linked-read data with chromosome conformation capture sequencing data to construct sequence scaffolds with megabase-scale contiguity, ordered into chromosomal pseudomolecules. We evaluate the performance of TRITEX on publicly available sequence data of tetraploid wild emmer and hexaploid bread wheat, and construct an improved annotated reference genome sequence assembly of the barley cultivar Morex as a community resource.
RNA-seq is a fundamental technique in genomics, yet reference bias, where transcripts derived from non-reference alleles are quantified less accurately, can undermine the accuracy of RNA-seq quantification and thus the conclusions drawn downstream. Reference bias in RNA-seq analysis has yet to be explored in complex polyploid genomes despite evidence that they are often a complex mosaic of wild relative introgressions, which introduce blocks of highly divergent genes.
Here we use hexaploid wheat as a model complex polyploid, using both simulated and experimental data to show that RNA-seq alignment in wheat suffers from widespread reference bias which is largely driven by divergent introgressed genes. This leads to underestimation of gene expression and incorrect assessment of homoeologue expression balance. By incorporating gene models from ten wheat genome assemblies into a pantranscriptome reference, we present a novel method to reduce reference bias, which can be readily scaled to capture more variation as new genome and transcriptome data become available.
This study shows that the presence of introgressions can lead to reference bias in wheat RNA-seq analysis. Researchers using non-sample reference genomes for RNA-seq alignment should exercise caution, and novel methods, such as the one presented here, should be considered.
We add a simple dynamic process for adaptive “social distancing” measures to a standard SIR model of the COVID pandemic. With a limited attention span and in the absence of a consistent long-term strategy against the pandemic, this process leads to a sweeping of an instability, i.e. fluctuations in the effective reproduction number around its bifurcation value of Reff = 1. While mitigating the pandemic in the short run, this process remains intrinsically fragile and does not constitute a sustainable strategy that societies could follow for an extended period of time.
•We add a simple dynamic process for adaptive “social distancing” to a standard SIR model of the COVID pandemic.
•This combined process leads to a sweeping of an instability, i.e. fluctuations around a bifurcation value.
•This process remains intrinsically fragile and does not constitute a sustainable strategy against the pandemic.
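The combined dynamics can be sketched as a standard SIR model whose contact rate is scaled down as an exponentially discounted memory of recent new cases rises (a minimal illustration; the attention-decay form and all parameter values are assumptions, not the paper's exact specification):

```python
# SIR model with adaptive "social distancing": the contact rate beta is
# damped by exp(-kappa * awareness), where awareness is an exponentially
# discounted memory of recent new cases (limited attention span).
# Parameter values are illustrative assumptions.
import numpy as np

def sir_adaptive(beta=0.4, gamma=0.1, kappa=50.0, memory=0.9,
                 steps=1000, dt=0.1):
    S, I = 0.99, 0.01
    awareness = 0.0
    r_eff_path = []
    for _ in range(steps):
        new_cases = beta * np.exp(-kappa * awareness) * S * I
        awareness = memory * awareness + (1 - memory) * new_cases
        S += -new_cases * dt
        I += (new_cases - gamma * I) * dt
        # effective reproduction number under current distancing
        r_eff_path.append(beta * np.exp(-kappa * awareness) * S / gamma)
    return np.array(r_eff_path)

path = sir_adaptive()
```

Tracking the effective reproduction number over time exhibits the qualitative mechanism in the abstract: distancing intensifies after waves of cases and relaxes as they fade, rather than holding the system at a fixed point.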
This work proposes and analyzes a methodology for finding least-squares solutions to systems of polynomial equations. Systems of polynomial equations are ubiquitous in computational science, with major applications in machine learning and computer security (e.g., model fitting and integer factorization). The proposed methodology maps the squared-error function for a polynomial equation onto an Ising Hamiltonian, ensuring that the approximate solutions (by least squares) to real-world problems can be computed on a quantum annealer even when exact solutions do not exist. Hamiltonians for integer factorization and polynomial systems of equations are implemented and analyzed for both logical optimality and physical practicality on modern quantum annealing hardware.
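The core idea can be illustrated with integer factorization (a minimal classical sketch, not the paper's annealer implementation): encode the unknown factors in binary variables and minimize the squared error (p·q − N)², which is exactly the cost function an annealer would minimize after expansion into an Ising/QUBO Hamiltonian. The 3-bit odd-factor encoding below is an illustrative assumption.

```python
# Least-squares "Hamiltonian" for factoring N = 15: encode two odd
# 3-bit factors in four binary variables and minimize the squared
# error. A quantum annealer would minimize the same polynomial after
# reduction to Ising form; here we brute-force it classically.
from itertools import product

N = 15

def energy(bits):
    p1, p2, q1, q2 = bits
    p = 1 + 2 * p1 + 4 * p2      # odd factor, 3-bit encoding
    q = 1 + 2 * q1 + 4 * q2
    return (p * q - N) ** 2      # squared-error cost function

best = min(product((0, 1), repeat=4), key=energy)
p = 1 + 2 * best[0] + 4 * best[1]
q = 1 + 2 * best[2] + 4 * best[3]
```

A ground-state energy of zero certifies an exact factorization; when no exact solution exists in the encoded range, the minimizer still returns the least-squares approximation, which is the point of the methodology.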