Physiologically based kinetic (PBK) models are widely used in pharmacology and toxicology to predict the internal disposition of substances following exposure, whether intentional or not. Due to their complexity, a large number of model parameters need to be estimated, either through in silico tools, in vitro experiments, or by fitting the model to in vivo data. In the latter case, fitting complex structural models to in vivo data can result in overparameterisation and produce unrealistic parameter estimates. To address these issues, we propose a novel parameter grouping approach, which reduces the parametric space by co-estimating groups of parameters across compartments. Grouping of parameters is performed using genetic algorithms and is fully automated, based on a novel goodness-of-fit metric. To illustrate the practical application of the proposed methodology, two case studies were conducted. The first demonstrates the development of a new PBK model, while the second focuses on model refinement. In the first case study, a PBK model was developed to elucidate the biodistribution of titanium dioxide (TiO2) nanoparticles in rats following intravenous injection. A variety of parameter estimation schemes were employed, and a comparative analysis based on goodness-of-fit metrics demonstrated that the proposed methodology yields models that outperform standard estimation approaches while using fewer parameters. In the second case study, an existing PBK model for perfluorooctanoic acid (PFOA) in rats was extended to incorporate additional tissues, providing a more comprehensive portrayal of PFOA biodistribution. Both models were validated against independent in vivo studies to ensure their reliability.
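To make the grouping idea concrete, the sketch below illustrates the general principle on a toy problem: a genetic algorithm searches over assignments of compartment-level parameters to groups, and each candidate grouping is scored by fitting one shared value per group and penalising the number of groups. This is a minimal illustration under stated assumptions (a hypothetical sum-of-exponentials model and an AIC-style score), not the implementation or goodness-of-fit metric used in the study.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

N_PARAMS, N_OBS = 6, 30                      # six compartment-level parameters
t = np.linspace(0.1, 24.0, N_OBS)            # sampling times (h)

def model(theta, t):
    # Toy readout: one exponential decay term per compartment (illustrative only).
    return np.array([np.exp(-k * t) for k in theta]).sum(axis=0)

# Synthetic data: six compartments but only three distinct rate constants,
# so the ideal grouping co-estimates pairs of parameters.
true_theta = np.array([0.1, 0.1, 0.5, 0.5, 1.5, 1.5])
data = model(true_theta, t) + rng.normal(0.0, 0.05, N_OBS)

def relabel(groups):
    # Relabel group indices as consecutive integers starting at zero.
    _, inverse = np.unique(groups, return_inverse=True)
    return inverse

def score(groups):
    # Fit one shared value per group, then penalise the number of groups (AIC-like).
    n_groups = groups.max() + 1
    def sse(g_vals):
        theta = np.asarray(g_vals)[groups]   # expand group values to all parameters
        return np.sum((model(theta, t) - data) ** 2)
    res = minimize(sse, x0=np.full(n_groups, 0.5),
                   method="L-BFGS-B", bounds=[(1e-3, 10.0)] * n_groups)
    return N_OBS * np.log(res.fun / N_OBS) + 2 * n_groups

def evolve(pop_size=40, n_gen=30, p_mut=0.2):
    # Tiny genetic algorithm over integer group-assignment vectors.
    pop = rng.integers(0, N_PARAMS, size=(pop_size, N_PARAMS))
    for _ in range(n_gen):
        fitness = np.array([score(relabel(ind)) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_PARAMS)
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            mutate = rng.random(N_PARAMS) < p_mut
            child[mutate] = rng.integers(0, N_PARAMS, mutate.sum())
            children.append(child)
        pop = np.array(children)
    best = pop[np.argmin([score(relabel(ind)) for ind in pop])]
    return relabel(best)

print("selected grouping:", evolve())        # e.g. [0 0 1 1 2 2] for the ideal split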
The aim of this study is to benchmark two Bayesian software tools, namely Stan and GNU MCSim, that use different Markov chain Monte Carlo (MCMC) methods for the estimation of physiologically based pharmacokinetic (PBPK) model parameters. The software tools were applied and compared on the problem of updating the parameters of a diazepam PBPK model using time-concentration human data. Both tools produced very good fits at the individual and population levels, despite the fact that GNU MCSim is not able to consider multivariate distributions. Stan outperformed GNU MCSim in sampling efficiency, due to its almost uncorrelated sampling. However, GNU MCSim exhibited much faster convergence and performed better in terms of effective samples produced per unit of time.
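For readers unfamiliar with the comparison metric, the short sketch below shows how effective sample size (ESS) and ESS per unit of time can be computed from MCMC output. The two chains, their autocorrelations and run times are hypothetical stand-ins, not results from the study.

import numpy as np

rng = np.random.default_rng(42)

def effective_sample_size(chain):
    # Simple ESS estimate: n / (1 + 2 * sum of autocorrelations),
    # truncating the sum at the first non-positive autocorrelation.
    n = len(chain)
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau

def ar1(n, rho):
    # AR(1) chain with lag-1 autocorrelation rho, as a stand-in for MCMC output.
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return x

# Hypothetical scenario: sampler A draws nearly independent samples but is slow;
# sampler B is correlated but produces many more draws per unit of time.
chains = {"sampler_A": (ar1(2_000, 0.05), 120.0),    # (draws, wall-clock seconds)
          "sampler_B": (ar1(10_000, 0.80), 60.0)}

for name, (draws, seconds) in chains.items():
    ess = effective_sample_size(draws)
    print(f"{name}: ESS = {ess:,.0f}, ESS/s = {ess / seconds:.1f}")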
• The variability of nanomaterials makes case-by-case risk assessment impractical.
• Read-across and machine learning approaches are required to ensure consumer safety.
• Access to harmonized and integrated datasets for modelling is a current bottleneck.
• Risk prediction requires integration of models covering release, exposure and hazards.
• NanoSolveIT integrates multi-scale models into an in silico risk assessment framework.
Nanotechnology has enabled the discovery of a multitude of novel materials exhibiting unique physicochemical (PChem) properties compared to their bulk analogues. These properties have led to a rapidly increasing range of commercial applications; this, however, may come at a cost if an association with long-term health and environmental risks is discovered or even just perceived. Many nanomaterials (NMs) have not yet had their potential adverse biological effects fully assessed, due to the costs and time constraints associated with experimental assessment, which frequently involves animals. Here, the available NM libraries are analyzed for their suitability for integration with novel nanoinformatics approaches and for the development of NM-specific Integrated Approaches to Testing and Assessment (IATA) for human and environmental risk assessment, all within the NanoSolveIT cloud platform. These established and well-characterized NM libraries (e.g. NanoMILE, NanoSolutions, NANoREG, NanoFASE, caLIBRAte, NanoTEST and the Nanomaterial Registry (>2000 NMs)) contain physicochemical characterization data as well as data for several relevant biological endpoints, assessed in part using harmonized Organisation for Economic Co-operation and Development (OECD) methods and test guidelines. Integration of such extensive NM information sources with the latest nanoinformatics methods will allow NanoSolveIT to model the relationships between NM structure (morphology), properties and adverse effects, and to predict the effects of other NMs for which fewer data are available. The project specifically addresses the needs of regulatory agencies and industry to evaluate, effectively and rapidly, the exposure, hazard and risk of nanomaterials and nano-enabled products, enabling the implementation of computational ‘safe-by-design’ approaches to facilitate NM commercialization.
Traditional animal testing for toxicity is expensive, time consuming, ethically questioned, sometimes inaccurate because of the need to extrapolate from animal to human, and in most cases not formally validated according to modern standards. This is driving regulatory bodies and companies to back alternative methods focusing on in silico and in vitro approaches. These are complex to implement and validate, and their wide adoption is not yet established despite legal directives providing an imperative. It is difficult to link a cell-level response to effects on a whole organism, but advances in high-throughput toxicogenomics towards elucidating the mechanism of action of substances are gradually reducing this gap and fostering the adoption of Next Generation Safety Assessment approaches. Quantitative in vitro to in vivo extrapolation (QIVIVE) methods hold the promise of revealing how to use in vitro omics data to predict the potential for in vivo toxicity. They could improve the prioritisation of lead compounds, reduce time and costs, including the number of animals used, and help with the complexity of extrapolating between species. We describe the state of the art of QIVIVE, including how the problems of dosing and timing are being approached, how in silico simulation can account for inter-individual variability, and how multiple techniques can be integrated to address complex tasks such as the prediction of long-term toxicity, with a close look at the open problems and challenges ahead.
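As a concrete, highly simplified illustration of what QIVIVE involves, the sketch below back-calculates an oral equivalent dose from an in vitro active concentration using a one-compartment steady-state model. All parameter values, unit choices and the assumed equivalence between the nominal in vitro concentration and the free plasma concentration are illustrative assumptions, not taken from the review.

# Hypothetical inputs (illustrative values, not from the review)
ac50_uM = 3.0            # in vitro active concentration (micromolar)
mw = 250.0               # molecular weight (g/mol)
clearance_L_h_kg = 0.5   # total plasma clearance per kg body weight (L/h/kg)
f_abs = 0.9              # fraction absorbed after oral dosing
fu = 0.1                 # fraction unbound in plasma

# Assumption: the nominal in vitro concentration is matched to the free plasma
# concentration at steady state.
target_free_css_mg_L = ac50_uM * mw / 1000.0      # convert uM to mg/L

# One-compartment steady state under repeated oral dosing:
#   Css_free = fu * f_abs * dose_rate / (CL * 24)
# so the oral equivalent dose rate is
#   dose_rate = Css_free * CL * 24 / (fu * f_abs)   [mg/kg/day]
dose_rate_mg_kg_day = target_free_css_mg_L * clearance_L_h_kg * 24.0 / (fu * f_abs)

print(f"oral equivalent dose ~ {dose_rate_mg_kg_day:.1f} mg/kg/day")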