Overflow metabolism is well known for yeast, bacteria and mammalian cells. It typically occurs under glucose excess conditions and is characterized by the excretion of by-products such as ethanol, acetate or lactate. This phenomenon, also denoted the short-term Crabtree effect, has been extensively studied over the past few decades; however, its basic regulatory mechanism and functional role in metabolism are still unknown. Here we present a comprehensive quantitative and time-dependent analysis of the exometabolome of Escherichia coli, Corynebacterium glutamicum, Bacillus licheniformis, and Saccharomyces cerevisiae during well-controlled bioreactor cultivations. Most surprisingly, in all cases a great diversity of central metabolic intermediates and amino acids is found in the culture medium, with extracellular concentrations varying in the micromolar range. Different hypotheses for these observations are formulated and experimentally tested. As a result, the intermediates in the culture medium during batch growth must originate from passive or active transport due to a new phenomenon termed "extended" overflow metabolism. Moreover, we provide broad evidence that this could be a common feature of all microorganism species when cultivated under conditions of carbon excess and non-inhibited carbon uptake. In turn, this finding has consequences for metabolite balancing and, particularly, for intracellular metabolite quantification and 13C-metabolic flux analysis.
Metabolic flux analysis (MFA) deals with the experimental determination of steady-state fluxes in metabolic networks. An important feature of the 13C MFA method is its capability to generate information on both directions of bidirectional reaction steps, given by exchange fluxes. The biological interpretation of these exchange fluxes and their relation to the thermodynamic properties of the respective reaction steps has never been systematically investigated. As a central result, it is shown here that for a general class of enzyme reaction mechanisms the quotients of net and exchange fluxes measured by 13C MFA are coupled to the Gibbs energies of the reaction steps. To establish this relation, the concept of apparent flux ratios of enzymatic isotope-labeling networks is introduced and some computing rules for these flux ratios are given. Application of these rules reveals a conceptual pitfall of 13C MFA, namely the inherent dependency of measured exchange fluxes on the chosen tracer atom. However, it is shown that this effect can be neglected for typical biochemical reaction steps under physiological conditions. In this situation, the central result can be formulated as a two-sided inequality relating fluxes, pool sizes, and standard Gibbs energies. This relation has far-reaching consequences for metabolic flux analysis, quantitative metabolomics, and network thermodynamics.
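The kind of coupling between flux quotients and Gibbs energies described above can be illustrated by the classical flux-force relationship. The following is an illustrative sketch only, not the two-sided inequality of the abstract; the symbols $v^{+}$, $v^{-}$, $v^{\mathrm{net}}$, $v^{\mathrm{xch}}$ and the min-convention for the exchange flux are assumptions introduced here:

```latex
% Flux-force relation for a single reversible reaction step with
% unidirectional forward flux v^+ and backward flux v^-:
\Delta G \;=\; -RT\,\ln\frac{v^{+}}{v^{-}}
\quad\Longleftrightarrow\quad
\frac{v^{+}}{v^{-}} \;=\; e^{-\Delta G / RT}

% With the common conventions v^{net} = v^{+} - v^{-} and
% v^{xch} = \min(v^{+}, v^{-}), an exergonic step
% (\Delta G < 0, hence v^{+} > v^{-}) satisfies
\frac{v^{\mathrm{net}}}{v^{\mathrm{xch}}}
  \;=\; \frac{v^{+} - v^{-}}{v^{-}}
  \;=\; e^{-\Delta G / RT} - 1 .
```

Under this convention, a vanishing net-to-exchange ratio indicates a reaction close to equilibrium ($\Delta G \approx 0$), while a large ratio indicates a strongly driven, nearly irreversible step.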
Thinning is a sub-sampling technique to reduce the memory footprint of Markov chain Monte Carlo. Despite being commonly used, thinning is rarely considered efficient. For sampling constraint-based models, a highly relevant use case in systems biology, we here demonstrate that thinning boosts the computational and, thereby, sampling efficiency of the widely used Coordinate Hit-and-Run with Rounding (CHRR) algorithm. By benchmarking CHRR with thinning on simplices and genome-scale metabolic networks of up to thousands of dimensions, we find a substantial increase in computational efficiency compared to unthinned CHRR, in our examples by orders of magnitude, as measured by the effective sample size per time (ESS/t), with performance gains growing with polytope (effective network) dimension. Using a set of benchmark models, we derive a ready-to-apply guideline for tuning thinning for efficient and effective use of compute resources without requiring additional coding effort. Our guideline is validated using three (out-of-sample) large-scale networks, and we show that it allows sampling convex polytopes uniformly to convergence in a fraction of the time, thereby unlocking the rigorous investigation of hitherto intractable models. The derivation of our guideline is explained in detail, allowing future researchers to update it as new model classes and more training data become available. CHRR with deliberate utilization of thinning thereby paves the way to keep pace with progressing model sizes derived with the constraint-based reconstruction and analysis (COBRA) tool set. Sampling and evaluation pipelines are available at https://jugit.fz-juelich.de/IBG-1/ModSim/fluxomics/chrrt.
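As a minimal, self-contained illustration of why the effective sample size per stored sample can grow under thinning, the sketch below runs a correlated AR(1) chain as a stand-in for raw MCMC output. All names and parameter values are hypothetical; this is not the CHRR implementation or the paper's benchmark code:

```python
import math
import random

def ar1_chain(n, rho=0.95, seed=42):
    """Correlated AR(1) chain as a stand-in for raw MCMC output."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + s * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def ess(chain):
    """Effective sample size from a truncated autocorrelation sum."""
    n = len(chain)
    mu = sum(chain) / n
    c0 = sum((v - mu) ** 2 for v in chain) / n
    tau = 1.0  # integrated autocorrelation time
    for lag in range(1, n // 2):
        c = sum((chain[i] - mu) * (chain[i + lag] - mu)
                for i in range(n - lag)) / n
        r = c / c0
        if r < 0.05:  # truncate once autocorrelation is negligible
            break
        tau += 2.0 * r
    return n / tau

raw = ar1_chain(20_000)
thinned = raw[::10]  # thinning: keep every 10th state only

# ESS per *stored* sample rises sharply under thinning, while total
# ESS drops only slightly -> a much better memory/compute trade-off.
print(f"raw:     ESS={ess(raw):8.1f} of {len(raw)} stored")
print(f"thinned: ESS={ess(thinned):8.1f} of {len(thinned)} stored")
```

Because adjacent raw samples are strongly autocorrelated, discarding nine out of ten states loses little statistical information while cutting storage tenfold, which is the effect the benchmark above quantifies via ESS/t.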
High-throughput experimentation has revolutionized data-driven experimental sciences and opened the door to the application of machine learning techniques. Nevertheless, the quality of any data analysis strongly depends on the quality of the data, and specifically on the degree to which random effects in the experimental data-generating process are quantified and accounted for. Accordingly, calibration, i.e. the quantitative association between observed quantities and measurement responses, is a core element of many workflows in experimental sciences. Particularly in the life sciences, univariate calibration, often involving non-linear saturation effects, must be performed to extract quantitative information from measured data. At the same time, the estimation of uncertainty is inseparably connected to quantitative experimentation. Adequate calibration models are required that describe not only the input/output relationship of a measurement system but also its inherent measurement noise. Due to its mathematical nature, statistically robust calibration modeling remains a challenge for many practitioners, while being extremely beneficial for machine learning applications. In this work, we present a bottom-up conceptual and computational approach that solves many problems of understanding and implementing non-linear, empirical calibration modeling for the quantification of analytes and process modeling. The methodology is first applied to the optical measurement of biomass concentrations in a high-throughput cultivation system, then to the quantification of glucose by an automated enzymatic assay. We implemented the conceptual framework in two Python packages, calibr8 and murefi, with which we demonstrate how to make uncertainty quantification for various calibration tasks more accessible. Our software packages enable more reproducible and automatable data analysis routines compared to commonly observed workflows in the life sciences.
Subsequently, we combine the previously established calibration models with a hierarchical Monod-like ordinary differential equation model of microbial growth to describe multiple replicates of Corynebacterium glutamicum batch cultures. Key process model parameters are learned by both maximum likelihood estimation and Bayesian inference, highlighting the flexibility of the statistical and computational framework.
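To make the idea of a non-linear, saturating calibration model concrete, here is a minimal sketch of a forward model (analyte concentration to measurement response) and its numerical inversion for quantification. The saturation function and all parameter values are hypothetical illustrations; this is not the calibr8 API:

```python
# Hypothetical saturating calibration curve (Michaelis-Menten-like):
# response = A * conc / (B + conc) + C. Parameter values are
# illustrative, not fitted to any real assay.
A, B, C = 2.0, 5.0, 0.1

def response(conc):
    """Forward calibration model: analyte concentration -> signal."""
    return A * conc / (B + conc) + C

def invert(signal, lo=0.0, hi=1e3, tol=1e-9):
    """Recover a concentration from a signal by bisection.

    Valid because the forward model is strictly monotone in conc.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if response(mid) < signal:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

conc = 12.5
sig = response(conc)
print(invert(sig))  # recovers ~12.5
```

A full calibration workflow would additionally fit A, B, C to standards and attach a signal-dependent noise model, so that the inversion yields an uncertainty interval rather than a point estimate; the sketch shows only the deterministic core.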
Quantitative characterization of biotechnological production processes requires the determination of different key performance indicators (KPIs) such as titer, rate and yield. Classically, these KPIs can be derived by combining black‐box bioprocess modeling with non‐linear regression for model parameter estimation. The presented pyFOOMB package enables a guided and flexible implementation of bioprocess models in the form of ordinary differential equation systems (ODEs). By building on Python as a powerful and multi‐purpose programming language, ODEs can be formulated in an object‐oriented manner, which facilitates their modular design, reusability, and extensibility. Once the model is implemented, seamless integration and analysis of the experimental data is supported by various Python packages that are already available. In particular, for the iterative workflow of experimental data generation and subsequent model parameter estimation, we employed the concept of replicate model instances, which are linked by common sets of parameters with global or local properties. For the description of multi‐stage processes, discontinuities in the right‐hand sides of the differential equations are supported via event handling using the freely available assimulo package. Optimization problems can be solved by making use of a parallelized version of the generalized island approach provided by the pygmo package. Furthermore, pyFOOMB in combination with Jupyter notebooks also supports education in bioprocess engineering and the applied learning of Python as a scientific programming language. Finally, the applicability and strengths of pyFOOMB are demonstrated by a comprehensive collection of notebook examples.
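The kind of ODE bioprocess model that such packages target can be sketched in a few lines. The Monod batch model below, integrated with explicit Euler, is a generic illustration with assumed parameter values, not pyFOOMB's actual interface:

```python
# Minimal Monod batch model (all parameter values assumed for illustration):
#   dX/dt = mu(S) * X,   dS/dt = -mu(S) * X / Y_XS,
#   mu(S) = mu_max * S / (K_S + S)
MU_MAX, K_S, Y_XS = 0.4, 0.05, 0.5  # 1/h, g/L, g/g

def rhs(X, S):
    """Right-hand side of the ODE system for biomass X and substrate S."""
    mu = MU_MAX * S / (K_S + S)
    return mu * X, -mu * X / Y_XS

def simulate(X0=0.1, S0=10.0, t_end=20.0, dt=0.01):
    """Explicit Euler integration of the batch model."""
    X, S = X0, S0
    for _ in range(int(t_end / dt)):
        dX, dS = rhs(X, max(S, 0.0))  # clamp S to avoid negative substrate
        X += dt * dX
        S += dt * dS
    return X, max(S, 0.0)

X_final, S_final = simulate()
# Substrate is exhausted; biomass approaches X0 + Y_XS * S0 = 5.1 g/L.
print(X_final, S_final)
```

A production-grade implementation would use an adaptive stiff-aware integrator and event handling for multi-stage processes, as the abstract describes; the Euler loop merely makes the model structure explicit.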
Metabolic fluxes are the manifestations of the co-operating actions in a complex network of genes, transcripts, proteins, and metabolites. As a final quantitative endpoint of all cellular interactions, the intracellular fluxes are of immense interest in fundamental as well as applied research. Unlike the quantities of interest at most omics levels, in vivo fluxes are, however, not directly measurable. In the last decade, 13C-based metabolic flux analysis emerged as the state-of-the-art technique to infer steady-state fluxes from labeling experiment data and the use of mathematical models. A very promising new area in systems metabolic engineering research is non-stationary 13C-metabolic flux analysis at metabolic steady-state conditions. Several studies have demonstrated an information surplus contained in transient labeling data compared to data taken at isotopic equilibrium, as is classically done. Enabled by recent, fairly multi-disciplinary progress, the new method opens several attractive options to (1) generate new insights, e.g., into cellular storage metabolism or the dilution of tracer by endogenous pools, and (2) shift limits inherent in the classical approach towards enhanced applicability with respect to cultivation conditions and biological systems. We review the new developments in metabolome-based non-stationary 13C flux analysis and outline future prospects for accurate in vivo flux measurement.
Corynebacterium glutamicum is a microbe of enormous biotechnological relevance. In particular, its strain ATCC 13032 is a widely used producer of L-amino acids at an industrial scale. Its apparent robustness also turns it into a favorable platform host for a wide range of further compounds, mainly because of emerging bio-based economies. A deep understanding of the biochemical processes in C. glutamicum is essential for a sustainable enhancement of the microbe's productivity. Computational systems biology has the potential to provide a valuable basis for driving metabolic engineering and biotechnological advances, such as increased yields of healthy producer strains based on genome-scale metabolic models (GEMs). Advanced reconstruction pipelines are now available that facilitate the reconstruction of GEMs and support their manual curation. This article presents iCGB21FR, an updated and unified GEM of C. glutamicum ATCC 13032 with high quality regarding comprehensiveness and data standards, built with the latest modeling techniques and advanced reconstruction pipelines. It comprises 1042 metabolites, 1539 reactions, and 805 genes with detailed annotations and database cross-references. The model was validated using different media and yielded realistic growth rate predictions under aerobic and anaerobic conditions. The new GEM produces all canonical amino acids, and its phenotypic predictions are consistent with laboratory data. The in silico model proved fruitful in adding knowledge to the metabolism of C. glutamicum: iCGB21FR still produces L-glutamate after knock-out of the enzyme pyruvate carboxylase, despite the common belief that this enzyme is essential for the amino acid's production. We conclude that integrating high standards into the reconstruction of GEMs facilitates replicating validated knowledge, closing knowledge gaps, and making it a useful basis for metabolic engineering. The model is freely available from the BioModels Database under identifier MODEL2102050001.
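The central constraint behind any GEM analysis, the steady-state mass balance S·v = 0, can be sketched on a toy network. The metabolites, reactions, and flux values below are invented for illustration and have nothing to do with iCGB21FR:

```python
# Hypothetical toy network with internal metabolites A, B and reactions
# R1: -> A (uptake), R2: A -> B, R3: B -> (secretion), R4: A -> (drain).
# The stoichiometric matrix S is stored sparsely as
# metabolite -> {reaction: coefficient}.
S = {
    "A": {"R1": 1, "R2": -1, "R4": -1},
    "B": {"R2": 1, "R3": -1},
}

def is_steady_state(v, tol=1e-9):
    """Check the core GEM constraint S @ v = 0 for every metabolite."""
    return all(
        abs(sum(coef * v[rxn] for rxn, coef in row.items())) < tol
        for row in S.values()
    )

print(is_steady_state({"R1": 10, "R2": 6, "R3": 6, "R4": 4}))  # True
print(is_steady_state({"R1": 10, "R2": 6, "R3": 5, "R4": 4}))  # False
```

Flux balance analysis then maximizes a biomass reaction over all flux vectors satisfying this balance plus capacity bounds; at genome scale this requires a linear programming solver rather than the feasibility check sketched here.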
The split GFP assay is a well-known technology for activity-independent screening of target proteins. A superfolder GFP is split into two non-fluorescent parts: GFP11, which is fused to the target protein, and GFP1-10. When both are present, GFP1-10 and the GFP11-tag self-assemble and a functional chromophore is formed. The assay, however, relies on the availability and quality of the GFP1-10 detector protein to develop fluorescence by assembly with the GFP11-tag connected to the target protein. GFP1-10 detector protein is often produced in small-scale shake flask cultivations and purified from inclusion bodies.
The production of GFP1-10 in inclusion bodies and its purification were comprehensively studied with Escherichia coli as host. Cultivation in complex and defined media as well as different feed strategies were tested in laboratory-scale bioreactor cultivations, and a standardized process was developed that provides GFP1-10 detector protein in high quantity and suitable quality. The split GFP assay was standardized to obtain robust and reliable results for cutinase secretion strains of Corynebacterium glutamicum with the Bacillus subtilis Sec signal peptides NprE and Pel. Influencing environmental conditions, such as pH and temperature, were thoroughly investigated.
GFP1-10 detector protein production was successfully scaled from shake flask to laboratory-scale bioreactor. A single run yielded sufficient material for up to 385 96-well plate screening runs. The application study with cutinase-secreting strains showed a very high correlation between measured cutinase activity and the split GFP fluorescence signal, proving applicability for larger screening studies.