Artificial intelligence (AI) and machine learning (ML) have been increasingly used in materials science to build predictive models and accelerate discovery. For selected properties, the availability of large databases has also facilitated the application of deep learning (DL) and transfer learning (TL). However, the lack of large datasets for the majority of properties prevents the widespread application of DL/TL. We present a cross-property deep-transfer-learning framework that leverages models trained on large datasets to build models on small datasets of different properties. We test the proposed framework on 39 computational and two experimental datasets and find that TL models with only elemental fractions as input outperform ML/DL models trained from scratch, even when the latter are allowed to use physical attributes as input, for 27/39 (≈69%) computational datasets and both experimental datasets. We believe the proposed framework can be widely useful for tackling the small-data challenge in applying AI/ML to materials science.
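As an illustration of the elemental-fractions input used by the TL models above, a composition string can be mapped to a fixed-length fraction vector. This is a minimal sketch, assuming a simplified formula parser and a toy element vocabulary; it is not the framework's actual featurizer.

```python
import re

def elemental_fractions(formula, elements):
    """Parse a simple formula like 'Fe2O3' into a fixed-length
    vector of elemental fractions over a chosen element list."""
    counts = {}
    for symbol, amount in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        counts[symbol] = counts.get(symbol, 0.0) + (float(amount) if amount else 1.0)
    total = sum(counts.values())
    return [counts.get(el, 0.0) / total for el in elements]

# Fe2O3 -> 40% Fe, 60% O over a toy element vocabulary
print(elemental_fractions("Fe2O3", ["Fe", "O", "Zn"]))  # [0.4, 0.6, 0.0]
```

A real featurizer would use the full periodic table as the element vocabulary, so that models trained on one property's dataset can be fine-tuned on another's with an identical input layout.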
The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, owing to their impressive ability to efficiently extract data-driven linkages between input materials representations and output properties. While traditional ML techniques have become quite ubiquitous, applications of more advanced deep learning (DL) techniques remain limited, primarily because big materials datasets are relatively rare. Given the demonstrated potential of DL and the increasing availability of big materials datasets, it is attractive to build deeper neural networks in a bid to boost model performance; in practice, however, simply adding layers degrades performance because of the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data are available. We present a general deep learning framework based on Individual Residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also achieve significantly (up to 47%) better model accuracy than plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
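The vanishing-signal effect that motivates IRNet's individual residual connections can be illustrated with a toy numpy forward pass. The widths, depth, and weight scale below are arbitrary assumptions for illustration; IRNet's actual architecture differs.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)
d, depth = 16, 30

x_plain = x_res = rng.normal(size=(1, d))
for _ in range(depth):
    W = rng.normal(scale=0.05, size=(d, d))
    x_plain = relu(x_plain @ W)       # plain stack: the signal decays layer by layer
    x_res = relu(x_res @ W) + x_res   # individual identity shortcut preserves it

print(np.abs(x_plain).mean(), np.abs(x_res).mean())
```

After 30 layers, the plain stack's activations have collapsed toward zero while the residual stack's remain of order one; the same mechanism keeps gradients alive during backpropagation, which is what enables the much deeper models described above.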
The present study focused on the green and sustainable synthesis of zinc oxide (ZnO) quantum dots (QDs) using zinc acetate as the precursor and Eclipta alba leaf extract as a reducing agent. The synthesis of ZnO QDs was monitored by ultraviolet–visible absorption spectroscopy at a wavelength (λmax) of 324 nm. Optimal synthesis of ZnO QDs was recorded at a temperature of 40 °C, pH 7, 5 mL zinc acetate (5 mM), 7 mL leaf extract, and a reaction time of 75 min. Transmission electron microscopy (TEM) depicted a homogeneous distribution of spherical ZnO QDs with a mean particle size of 6 nm, comparable to that of biomolecules. Selected area electron diffraction (SAED) analysis revealed the crystalline nature of the ZnO QDs, with a hexagonal wurtzite phase and lattice constants a = b = 0.32 nm and c = 0.52 nm. Furthermore, the physical interactions between ZnO QDs and E. coli cells were studied by TEM and agar well diffusion methods, which showed enhanced antimicrobial activity. Overall, the unique size and stability of these QDs open up possibilities for applications in a range of commercial consumer and clinical products and in fluorescence labeling, including use as an antimicrobial agent.
•ZnO quantum dots (QDs) were grown via a green chemistry technique within 5 min.
•ZnO QDs depicted a ∼6 nm uniform spherical shape and were stable at room temperature.
•ZnO QDs revealed excellent antimicrobial activity over the bulk form of zinc acetate dihydrate.
While experiments and DFT computations have been the primary means of understanding the chemical and physical properties of crystalline materials, experiments are expensive, and DFT computations are time-consuming and show significant discrepancies against experiments. Predictive models trained on DFT computations currently provide a rapid screening method for selecting materials candidates for further DFT computations and experiments; however, such models inherit the large discrepancies of their DFT-based training data. Here, we demonstrate how AI can be leveraged together with DFT to compute materials properties more accurately than DFT itself, focusing on the critical materials science task of predicting the formation energy of a material given its structure and composition. On an experimental hold-out test set containing 137 entries, AI can predict formation energy from materials structure and composition with a mean absolute error (MAE) of 0.064 eV/atom; comparing this against DFT computations (discrepancies of >0.076 eV/atom), we find for the first time that AI can significantly outperform DFT for the same task.
Modern machine learning (ML) and deep learning (DL) techniques using high-dimensional data representations have helped accelerate the materials discovery process by efficiently detecting hidden patterns in existing datasets and linking input representations to output properties for a better understanding of scientific phenomena. While deep neural networks composed of fully connected layers have been widely used for materials property prediction, simply creating a deeper model with a large number of layers often runs into the vanishing gradient problem, degrading performance and thereby limiting usage. In this paper, we study and propose architectural principles to improve the performance of model training and inference under fixed parametric constraints. We present a general deep-learning framework based on branched residual learning (BRNet) with fully connected layers that can work with any numerical vector-based representation as input to build accurate models for predicting materials properties. We train models for materials properties using numerical vectors representing different composition-based attributes of the respective materials and compare the performance of the proposed models against traditional ML and existing DL architectures. We find that the proposed models are significantly more accurate than the ML/DL models for all data sizes across the different composition-based input attributes. Further, branched learning requires fewer parameters and results in faster model training due to better convergence during the training phase than existing neural networks, thereby efficiently building accurate models for predicting materials properties.
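As a rough illustration of a branched skip connection, the sketch below reuses the network input as a shared shortcut for every block. This is one plausible reading assumed for illustration only, not BRNet's published architecture; widths and depth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda x: np.maximum(x, 0.0)

d = 8
x = rng.normal(size=(1, d))
branch = x.copy()                      # the branch is reused as a shared shortcut
for _ in range(12):
    W = rng.normal(scale=0.1, size=(d, d))
    x = relu(x @ W) + branch           # skip connection from the shared branch
print(x.shape)
```

Because the shortcut comes from a single shared branch rather than from each layer's own input, no extra projection weights are needed per block, which is consistent with the parameter savings described above.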
With ever-increasing demand and stressed operating conditions, resource expansion is the only way to maintain a sustainable electric grid, and transmission system expansion is one of the important aspects in this regard. In recent years, the expansion problem has been addressed by several researchers, and meta-heuristic techniques have been applied to solve it. In this paper, a new variant of the Teaching Learning Based Optimization (TLBO) algorithm is proposed by adding sine-function-based diversity to the teaching phase. The proposed variant is named Composite TLBO (C-TLBO). Its efficacy is first evaluated on standard benchmark functions and then on two standard electrical networks, including cases with uncertainty and demand bursts. The optimization results are assessed with several analytical and statistical tests, which affirm that the proposed modification substantially enhances the performance of the algorithm.
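The teaching phase of standard TLBO, with a hypothetical sine-based diversity term standing in for the C-TLBO modification (the paper's exact form is not reproduced here), can be sketched on a toy benchmark function as:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    return float(np.sum(x**2))

def tlbo_teacher_phase(pop, fitness, t, t_max):
    """Teacher phase of TLBO. The sine-based diversity factor is a
    hypothetical stand-in for C-TLBO's modification, not the paper's form."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)                # teaching factor in {1, 2}
    r = rng.random(pop.shape)
    diversity = np.sin(np.pi * t / t_max)  # hypothetical sine schedule
    return pop + r * (teacher - TF * mean) * (1.0 + diversity)

pop = rng.uniform(-5, 5, size=(20, 3))
best0 = min(sphere(x) for x in pop)
for t in range(100):
    fit = np.array([sphere(x) for x in pop])
    new_pop = tlbo_teacher_phase(pop, fit, t, 100)
    new_fit = np.array([sphere(x) for x in new_pop])
    keep = new_fit < fit                   # greedy selection
    pop[keep] = new_pop[keep]

best = min(sphere(x) for x in pop)
print(best0, best)
```

Greedy selection guarantees that each learner's fitness is non-increasing, so the best solution only improves; full TLBO also includes a learner phase, omitted here for brevity.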
Modern data mining techniques using machine learning (ML) and deep learning (DL) algorithms have been shown to excel in the regression-based task of materials property prediction using various materials representations. In attempts to improve the predictive performance of deep neural network models, researchers have added more layers and developed new architectural components to create sophisticated deep models that aid the training process and improve the predictive ability of the final model. However, these modifications usually require substantial computational resources, further increasing the already large model training time, which is often infeasible and thereby limits usage for most researchers. In this paper, we study and propose a deep neural network framework for regression-based problems comprising fully connected layers that can work with any numerical vector-based materials representation as model input. We present a novel deep regression neural network, iBRNet, with branched skip connections and multiple schedulers, which reduces the number of parameters used to construct the model, improves accuracy, and decreases the training time of the predictive model. We train models using composition-based numerical vectors representing the elemental fractions of the respective materials and compare their performance against traditional ML and several known DL architectures. Using multiple datasets with varying data sizes for training and testing, we show that the proposed iBRNet models outperform the state-of-the-art ML and DL models for all data sizes. We also show that the branched structure and the use of multiple schedulers lead to fewer parameters and faster model training with better convergence than other neural networks.
Scientific contribution: The combination of multiple callback functions in deep neural networks minimizes training time and maximizes accuracy in a controlled computational environment with parametric constraints for the task of materials property prediction.
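The idea of combining multiple schedulers can be illustrated with a toy learning-rate schedule. The warmup and cosine schedulers below, and the rule of taking their minimum, are assumptions for illustration, not necessarily the callbacks iBRNet uses.

```python
import math

def warmup(step, base_lr, warmup_steps):
    # Linearly ramp the learning rate up over the first warmup_steps steps.
    return base_lr * min(1.0, (step + 1) / warmup_steps)

def cosine(step, base_lr, total_steps):
    # Cosine annealing from base_lr down to zero over the full run.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

def combined_lr(step, base_lr=1e-3, warmup_steps=10, total_steps=100):
    # Combine the two schedulers: warmup dominates early, cosine decay later.
    return min(warmup(step, base_lr, warmup_steps),
               cosine(step, base_lr, total_steps))

print(combined_lr(0), combined_lr(50), combined_lr(99))
```

The combined schedule rises during warmup and then decays smoothly, a common recipe for stabilizing early training while still converging well.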
X-linked agammaglobulinemia (XLA, OMIM #300755) is a primary immunodeficiency disorder caused by pathogenic variations in the BTK gene, characterized by failure of development and maturation of B lymphocytes. The estimated prevalence worldwide is 1 in 190,000 male births. Recently, genome sequencing has been widely used in difficult-to-diagnose and familial cases. We report a large Indian family suffering from XLA with five affected individuals. We performed complete blood count, immunoglobulin assay, and lymphocyte subset analysis for all patients and analyzed Btk expression for one patient and his mother. Whole exome sequencing (WES) was performed for four patients and whole genome sequencing (WGS) for two patients. Carrier screening was done for 17 family members using Multiplex Ligation-dependent Probe Amplification (MLPA), and haplotype ancestry mapping was performed using fineSTRUCTURE. All patients had hypogammaglobulinemia and low CD19+ B cells. The one patient who underwent Btk estimation had low expression, and his mother showed a mosaic pattern. We could not identify any single nucleotide variants or small insertions/deletions in the WES dataset that correlated with the clinical features of the patients. Structural variant analysis of the WGS data identified a novel large deletion of 5,296 bp at loci chrX:100,624,323-100,629,619, encompassing exons 3-5 of the BTK gene. Family screening revealed seven carriers of the deletion. Two patients had a successful HSCT. Haplotype mapping revealed a South Asian ancestry. WGS led to identification of the precise genetic mutation, which could help in early diagnosis leading to improved outcomes, prevention of permanent organ damage, and improved quality of life, as well as enabling genetic counselling and prenatal diagnosis in the family.
Water is a natural and essential resource for humans, animals, and plants to survive. However, only ~2.5 % of water resources are available as fresh water, while the remaining ~97.5 % is saline water, which is unsuitable for humanity. According to the WHO, water scarcity will worsen by 2050. As a result, numerous researchers, scientists, and engineers are working to expand water resources with advanced treatment technologies. Among the various options, desalination is critical for converting saline water to fresh water. According to a recent update from the International Desalination Association (IDA, Reuse Handbook 2022–23), approximately 22,757 desalination plants are operating worldwide, providing ~107.95 million cubic meters of freshwater per day (m3/day). Furthermore, in this digital age, artificial intelligence (AI) techniques, such as gray wolf optimization (GWO), the sine cosine algorithm (SCA), artificial neural networks (ANN), the multi-verse optimizer (MVO), fuzzy logic systems (FLS), the moth flame optimizer (MFO), particle swarm optimization (PSO), the artificial hummingbird algorithm (AHA), and genetic algorithms (GA), are playing a vital role and are capable of deep analysis of real-time desalination plants, saving time, energy, human effort, and money. This study presents a critical review of various aspects of current PSO-ANN techniques for desalination plants. In this regard, recent Web of Science (WoS) datasets, provided by Clarivate Analytics, list more than 54,856 records (1965–2023) on desalination and more than 180 records (2008–2023) on PSO-ANN techniques globally. These records include research articles, reviews, proceedings, letters, books, chapters, and editorial materials.
Finally, this review article analyzes the various perspectives of PSO-ANN techniques in the water desalination process, helping plant engineers and researchers improve plant performance with minimum effort and time.
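A minimal particle swarm optimizer illustrates the PSO half of a PSO-ANN hybrid. In practice, the objective f would be the ANN's prediction error over plant data (e.g., as a function of the network weights); here a toy sphere function stands in, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over R^dim."""
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward g.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

best, best_f = pso(lambda p: float(np.sum(p**2)), 3)
print(best_f)
```

In a PSO-ANN workflow, each particle would encode a candidate weight vector for the network, and the swarm would search for weights minimizing the training error in place of gradient descent.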
Electric vehicles (EVs) are on the path to becoming a solution to the emissions released by the internal combustion engine vehicles on the road. EV charging management requires a smart grid platform that allows communication and control among the aggregator, the consumer, and the grid. This study presents an operational strategy for PV-assisted charging stations (PVCSs) that allows the EV to be charged primarily by PV energy, followed by the EV station's battery storage (BS) and then the grid. Multi-aggregator collaborative scheduling is considered, including a monetary penalty on the aggregator for any unscheduled EVs. The impact of the PVCS is compared to a case in which no PV/BS is included. A variation in the PV profile is included in the evaluation to assess its impact on total profits, and profit results are compared for minimum, average, and maximum PV energy output. The results indicate that including penalties for unscheduled EVs lowered profits. Further, profits increased as the number of EVs scheduled through PV/BS increased, implying that a smaller percentage of EVs is scheduled by the grid when more PV and battery energy is available.
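The stated charging priority (PV first, then battery storage, then the grid) can be sketched as a simple dispatch rule; the function name, units, and example numbers are illustrative, not taken from the study.

```python
def dispatch_ev_demand(demand_kw, pv_kw, bs_kw):
    """Allocate EV charging demand with the priority order described:
    PV energy first, then battery storage (BS), then the grid."""
    from_pv = min(demand_kw, pv_kw)              # serve as much as possible from PV
    from_bs = min(demand_kw - from_pv, bs_kw)    # then draw on battery storage
    from_grid = demand_kw - from_pv - from_bs    # grid covers the remainder
    return {"pv": from_pv, "bs": from_bs, "grid": from_grid}

print(dispatch_ev_demand(100, 60, 25))  # {'pv': 60, 'bs': 25, 'grid': 15}
```

Under this rule, a larger PV/BS supply directly shrinks the grid share of charging, which matches the profit trend reported above.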