The whole blood pyrogen test, invented 25 years ago, and its variant based on cryo-preserved blood one year later brought momentum into the field of pyrogen testing, which, despite the broad application of the Limulus amebocyte lysate (LAL) assay, also known as the bacterial endotoxin test (BET), consumed several hundred thousand rabbits per year worldwide. The resulting international validation and the lengthy acceptance and implementation process of what are now called monocyte activation tests (MATs) is finally impacting animal numbers - at least in Europe - reducing them by more than 70% and counting. The author sees no reason to continue any regulatory rabbit testing for pyrogens other than the lack of acceptance of MATs in some regions of the world. The availability of MATs has also opened the discussion about the shortcomings of LAL/BET, namely its restriction to Gram-negative pyrogens, its failure to reflect the potency of these pyrogens in humans, interference and masking by many products, and animal welfare concerns for horseshoe crabs. The obvious advantages of MATs in all these respects should lead to a shift from LAL/BET to MATs. We are starting to see this for vaccines and medical devices, but other areas such as safety testing of blood transfusions, cell therapies and nanomaterials, and the assessment of airborne pyrogens still need to grasp the opportunity provided by MATs. While the different MATs can jointly serve these needs, the whole blood MAT has some advantages, as discussed here.
Earlier, we created a chemical hazard database of approximately 10,000 chemicals via natural language processing of dossiers submitted to the European Chemicals Agency. We identified repeat OECD guideline tests to establish the reproducibility of acute oral and dermal toxicity, eye and skin irritation, mutagenicity and skin sensitization tests. Based on 350-700+ chemicals each, the probability that an OECD guideline animal test would output the same result in a repeat test was 78%-96% (sensitivity 50%-87%). An expanded database with more than 866,000 chemical properties/hazards was used as training data and to model health hazards and chemical properties. The constructed models automate and extend the read-across method of chemical classification. The novel models, called RASARs (read-across structure-activity relationships), use binary fingerprints and Jaccard distance to define chemical similarity. A large chemical similarity adjacency matrix is constructed from this similarity metric and used to derive feature vectors for supervised learning. We show results on 9 health hazards from 2 kinds of RASARs, "Simple" and "Data Fusion". The "Simple" RASAR seeks to duplicate the traditional read-across method, predicting hazard from chemical analogs with known hazard data. The "Data Fusion" RASAR extends this concept by creating large feature vectors from all available property data rather than only the modeled hazard. Simple RASAR models tested in cross-validation achieve 70%-80% balanced accuracies with constraints on tested compounds. Cross-validation of Data Fusion RASARs shows balanced accuracies in the 80%-95% range across 9 health hazards with no constraints on tested compounds.
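The similarity metric named above can be sketched in a few lines. This is a minimal illustration, not the RASAR implementation: Jaccard distance between binary fingerprints represented as sets of "on" bit positions, with toy bit sets in place of real chemical fingerprints.

```python
def jaccard_distance(fp_a: set, fp_b: set) -> float:
    """Jaccard distance = 1 - |A ∩ B| / |A ∪ B|, for binary
    fingerprints represented as sets of 'on' bit positions."""
    union = fp_a | fp_b
    if not union:
        return 0.0  # two empty fingerprints: treat as identical
    return 1.0 - len(fp_a & fp_b) / len(union)

# Toy fingerprints (illustrative bit positions, not real chemistry):
fp1 = {1, 3, 5, 8}
fp2 = {1, 3, 7, 8}
print(jaccard_distance(fp1, fp2))  # 3 shared of 5 total bits -> 0.4
```

An entry of the similarity adjacency matrix would then be 1 minus this distance (the Tanimoto index); stacking the similarities of a chemical to its neighbours yields the feature vector used for supervised learning.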
The rapid progress of AI impacts various areas of life, including toxicology, and promises a major role for AI in future risk assessments. Toxicology has shifted from a purely empirical science focused on observing chemical exposure outcomes to a data-rich field ripe for AI integration. AI methods are well-suited to handling and integrating large, diverse data volumes - a key challenge in modern toxicology. Additionally, AI enables Predictive Toxicology, as demonstrated by the automated read-across tool RASAR that achieved 87% balanced accuracy across nine OECD tests and 190,000 chemicals, outperforming animal test reproducibility. AI’s ability to handle big data and provide probabilistic outputs facilitates probabilistic risk assessment. Rather than just replicating human skills at larger scales, AI should be viewed as a transformative technology. Despite potential challenges, like model black-boxing and dataset biases, explainable AI (xAI) is emerging to address these issues.
Animals like mice and rats have long been used in medical research to help understand disease and test potential new treatments before human trials. However, while animal studies have contributed to important advances, too much reliance on animal models can also mislead drug development. This article explains for a general audience how animal research is used to develop new medicines, its benefits and limitations, and how more accurate and humane techniques—alternatives to animal testing—could improve this process.
The whole blood pyrogen test was first described in this journal exactly twenty years ago. It employs the cytokine response of blood monocytes for the detection of microbiological contaminants, with the potential to finally replace the still broadly used rabbit pyrogen test. The article reviews its development process and the current status of the test, as well as the challenges and missed opportunities. The article highlights the enormous efforts of many people to get the test to where it is today. But it also shows the incredible missed opportunities for implementation, and thus for sparing the roughly 400,000 rabbits still used for this purpose per year worldwide; in the EU, since the official acceptance of the test, the number of animals used for pyrogen testing did not fall but increased by about 10,000 to 170,000. The test is the first solution enabling adequate pyrogen testing of cell therapies, including blood transfusions, and medical devices, but has not been implemented for either application by authorities. As the test can quantitatively assess human-relevant airborne pyrogens, the contribution of pyrogens to chronic obstructive lung diseases and childhood asthma can for the first time be defined, and home and workplace safety improved in the future.
Earlier, we created a large machine-readable database of 10,000 chemicals and 800,000 associated studies by natural language processing of the public parts of Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) registrations until December 2014. This database was used to assess the reproducibility of the six most frequently used Organisation for Economic Co-operation and Development (OECD) guideline tests. These tests consume 55% of all animals in safety testing in Europe, i.e. about 600,000 animals. With 350–750 chemicals with multiple results per test, reproducibility (balanced accuracy) was 81%, and 69% of toxic substances were found again in a repeat experiment (sensitivity 69%). Inspired by the increasingly used read-across approach, we created a new type of QSAR, which is based on the similarity of chemicals and not on chemical descriptors. A landscape of the chemical universe was calculated using 10 million structures, in which, based on Tanimoto indices, similar chemicals are close and dissimilar chemicals far from each other. This allows placing any chemical of interest into the map and evaluating the information available for surrounding chemicals. In a data fusion approach, in which 74 different properties were taken into consideration, machine learning (random forest) allowed a fivefold cross-validation for 190,000 (non-)hazard labels of chemicals, for which nine hazards were predicted. The balanced accuracy of this approach was 87% with a sensitivity of 89%. Each prediction comes with a certainty measure based on the homogeneity of data and the distance of neighbours. Ongoing developments and future opportunities are discussed.
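The performance metrics quoted above combine as follows. This is a hedged sketch of the standard definitions of sensitivity, specificity, and balanced accuracy; the confusion-matrix counts are made up purely to illustrate how an 81% balanced accuracy can coexist with a 69% sensitivity, and are not data from the study.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of toxic substances found again."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of non-toxic substances confirmed."""
    return tn / (tn + fp)

def balanced_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    """Mean of sensitivity and specificity; robust to class imbalance."""
    return 0.5 * (sensitivity(tp, fn) + specificity(tn, fp))

# Illustrative counts only: 69 of 100 toxic chemicals reproduced,
# 93 of 100 non-toxic chemicals reproduced in the repeat test.
print(balanced_accuracy(tp=69, fn=31, tn=93, fp=7))  # -> 0.81
```

Balanced accuracy averages the two per-class rates, which matters here because toxic and non-toxic labels are not equally frequent in the database.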
Alternative methods to animal use in toxicology are evolving with new advanced tools and multilevel approaches, answering on the one hand to 3Rs requirements and on the other offering relevant and valid tests for drugs and chemicals, considering also their combination in test strategies for proper risk assessment.
While stand-alone methods have been demonstrated to be applicable for some specific toxicological predictions, with some limitations, the new strategy for applying New Approach Methods (NAMs) to solve complex toxicological endpoints is addressed by Integrated Approaches to Testing and Assessment (IATA), also known as Integrated Testing Strategies (ITS) or Defined Approaches for Testing and Assessment (DA). The central challenge of evidence integration is shared with the needs of risk assessment and systematic reviews in evidence-based toxicology. Increasingly, machine learning (aka Artificial Intelligence, AI) lends itself to integrating diverse evidence streams.
In this article, we give an overview of the state of the art of alternative methods and IATA in toxicology for regulatory use for various hazards, outlining future orientations and perspectives. We call for leveraging the synergies of integrated approaches and evidence integration from in vivo, in vitro and in silico methods as true in vivitrosi.
Many drugs have progressed through preclinical and clinical trials and have been available - for years in some cases - before being recalled by the FDA for unanticipated toxicity in humans. One reason for such poor translation from drug candidate to successful use is a lack of model systems that accurately recapitulate normal tissue function of human organs and their response to drug compounds. Moreover, tissues in the body do not exist in isolation, but reside in a highly integrated and dynamically interactive environment, in which actions in one tissue can affect other downstream tissues. Few engineered model systems, including the growing variety of organoid and organ-on-a-chip platforms, have so far reflected the interactive nature of the human body. To address this challenge, we have developed an assortment of bioengineered tissue organoids and tissue constructs that are integrated in a closed circulatory perfusion system, facilitating inter-organ responses. We describe a three-tissue organ-on-a-chip system, comprised of liver, heart, and lung, and highlight examples of inter-organ responses to drug administration. We observe drug responses that depend on inter-tissue interaction, illustrating the value of multiple tissue integration for in vitro study of both the efficacy of and side effects associated with candidate drugs.
Neurodevelopment is uniquely sensitive to toxic insults and there are concerns that environmental chemicals are contributing to widespread subclinical developmental neurotoxicity (DNT). Increased DNT evaluation is needed due to the lack of such information for most chemicals in common use, but in vivo studies recommended in regulatory guidelines are not practical for the large-scale screening of potential DNT chemicals. It is widely acknowledged that developmental neurotoxicity is a consequence of disruptions to basic processes in neurodevelopment and that testing strategies using human cell-based in vitro systems that mimic these processes could aid in prioritizing chemicals with DNT potential. Myelination is a fundamental process in neurodevelopment that should be included in a DNT testing strategy, but there are very few in vitro models of myelination. Thus, there is a need to establish an in vitro myelination assay for DNT. Here, we summarize the routes of myelin toxicity and the known models to study this particular endpoint.