Topology optimization has emerged as a popular approach to refine a component’s design and increase its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to the multiple finite element analysis iterations required to evaluate the component’s performance during the optimization process. Recently, machine learning (ML)-based topology optimization methods have been explored by researchers to alleviate this issue. However, previous ML approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current methods are based on a single ML model for end-to-end prediction, which requires a large dataset for training. These challenges make it non-trivial to extend current approaches to higher resolutions. In this paper, we develop deep learning-based frameworks consistent with traditional topology optimization algorithms for 3D topology optimization with a reasonably fine (high) resolution. We achieve this by training multiple networks, each learning a different step of the overall topology optimization methodology, making the framework more consistent with the topology optimization algorithm. We demonstrate the application of our framework on both 2D and 3D geometries. The results show that our approach predicts the final optimized design better (5.76× reduction in total compliance MSE in 2D; 2.03× reduction in total compliance MSE in 3D) than current ML-based topology optimization methods.
• A deep learning framework consistent with the SIMP algorithm for topology optimization.
• Framework scalable to high resolution in 3D structural topology optimization.
• Uses intermediate density and compliance to improve the final topology prediction.
• Framework results validated using ground-truth SIMP-based topology optimization.
• More than 5× reduction in error compared to a single density-based ML model.
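The SIMP-style density update that such frameworks emulate can be sketched in a few lines. The following optimality-criteria step is a generic illustration only (variable names, the move limit, and the bisection tolerance are illustrative, not the authors' implementation):

```python
import numpy as np

def oc_density_update(x, dc, vol_frac, move=0.2):
    """One optimality-criteria update of SIMP element densities.

    x: current densities, dc: compliance sensitivities (non-positive),
    vol_frac: target volume fraction. Bisection on the Lagrange
    multiplier enforces the volume constraint.
    """
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # Multiplicative OC rule, clamped to the trust region [x-move, x+move]
        x_new = x * np.sqrt(np.maximum(-dc, 1e-30) / lmid)
        x_new = np.clip(x_new, np.maximum(x - move, 1e-3),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > vol_frac:
            l1 = lmid  # too much material: raise the multiplier
        else:
            l2 = lmid
    return x_new

# Toy usage: 100 elements at density 0.5, synthetic sensitivities
x_new = oc_density_update(np.full(100, 0.5), -np.linspace(1.0, 2.0, 100), 0.4)
```

In the paper's setting, separate networks learn individual steps of this loop (e.g. the intermediate density and compliance fields) rather than predicting the final design end to end.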
Recent advances in generative modeling, namely diffusion models, have revolutionized the field, enabling high-quality image generation tailored to user needs. This paper proposes a framework for the generative design of structural components. Specifically, we employ a Latent Diffusion model to generate potential designs of a component that can satisfy a set of problem-specific loading conditions. One of the distinct advantages our approach offers over other generative approaches, such as generative adversarial networks (GANs), is that it permits the editing of existing designs. We train our model using a dataset of geometries obtained from structural topology optimization utilizing the SIMP algorithm. Consequently, our framework generates inherently near-optimal designs. Our work presents quantitative results that support the structural performance of the generated designs and the variability in potential candidate designs. Furthermore, we provide evidence of the scalability of our framework by operating over voxel domains with resolutions varying from 32³ to 128³. Our framework can be used as a starting point for generating novel near-optimal designs similar to topology-optimized designs.
• Latent diffusion model for generating 3D structural component designs.
• Framework for generating component designs consistent with topology optimization.
• Generated designs have similar (near-optimal) strain energy to SIMP designs.
• Large-scale 3D voxel dataset for structural topology optimization.
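As a rough illustration of the generative step, the reverse (denoising) loop of a diffusion sampler over a latent looks like the following. This is only a sketch: the noise schedule is illustrative and the noise-prediction network is a zero-returning placeholder standing in for a trained U-Net; a trained decoder would then map the final latent back to a voxel geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative values, not the paper's settings)
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(z, t):
    # Placeholder for a trained noise-prediction network over the latent z;
    # returns zeros here purely so the loop is runnable.
    return np.zeros_like(z)

def ddpm_sample(shape):
    """Ancestral DDPM sampling in latent space."""
    z = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = eps_model(z, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # no noise is added at the final step
            z = z + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return z

latent = ddpm_sample((4, 4, 4))  # tiny latent standing in for a real one
```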
Structural topology optimization is a compute-intensive process due to the several iterations of simulation required to evaluate the performance of the component during optimization. Deep learning (DL)-based approaches can address this challenge, but these methods have been demonstrated mainly on 2D shapes and, at best, on low-resolution 3D geometries (typically 32³). Further, due to non-manufacturable geometric features, the optimal geometries predicted by DL may not be manufacturable, even using additive manufacturing. In this paper, we develop a DL framework using a multigrid convolutional neural network (CNN) to generate high-resolution topology-optimized 3D geometries, with additional checks on the manufacturability of the predicted shapes. Our framework predicts the final optimal topology using the initial strain energy (the objective function of structural topology optimization) and the target volume fraction (the material fraction to be preserved after optimization) as input. We train the network using a multigrid approach, which enables topology optimization at 128³ resolution, previously computationally challenging. We first train the multigrid CNN at a lower resolution and then transfer the learned network to continue training at higher resolutions. We use a distributed deep learning framework on a GPU supercomputing cluster to further reduce training time; distributed DL speeds up training by more than 4× while achieving similar model performance. Finally, we check the optimal geometries for manufacturability using fused deposition modeling (FDM)-specific manufacturability constraints. The large training dataset (>60,000 high-resolution topology optimization examples) will be released with the paper to enable further research on this topic.
• Multigrid approach to train the neural network at a fine resolution of 128³.
• Demonstrate manufacturability by 3D printing several sample models predicted by our framework.
• A data-parallel distributed deep learning framework to accelerate the training process.
• A comprehensive high-resolution dataset consisting of more than 60k optimal shapes.
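The multigrid transfer hinges on convolutional weights being independent of grid size, so a kernel trained on a coarse grid can be reused unchanged on a finer one. A minimal sketch of that property (naive convolution; the tiny grid sizes stand in for the 32³ and 128³ resolutions, and the random kernel stands in for learned weights):

```python
import numpy as np

def conv3d_same(vol, kernel):
    """Naive 'same' 3D convolution; fine for a sketch, not for speed."""
    k = kernel.shape[0]
    pad = k // 2
    v = np.pad(vol, pad)
    out = np.zeros_like(vol)
    for i in range(vol.shape[0]):
        for j in range(vol.shape[1]):
            for l in range(vol.shape[2]):
                out[i, j, l] = np.sum(v[i:i + k, j:j + k, l:l + k] * kernel)
    return out

# One shared kernel, standing in for pretrained CNN weights
kernel = np.random.default_rng(0).standard_normal((3, 3, 3)) * 0.1

low = np.random.default_rng(1).random((8, 8, 8))      # coarse training grid
high = np.random.default_rng(2).random((16, 16, 16))  # fine grid, same weights

# The identical kernel applies at both resolutions, which is what makes
# pretraining coarse and fine-tuning fine (multigrid transfer) possible.
out_low = conv3d_same(low, kernel)
out_high = conv3d_same(high, kernel)
```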
Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention. However, most neural PDE solvers only apply to rectilinear domains and do not systematically address the imposition of boundary conditions over irregular domain boundaries. In this paper, we present a neural framework to solve partial differential equations over domains with irregularly shaped (non-rectilinear) geometric boundaries. Given the shape of the domain as an input (represented as a binary mask), our network is able to predict the solution field, and can generalize to novel (unseen) irregular domains; the key technical ingredient to realizing this model is a physics-informed loss function that directly incorporates the interior-exterior information of the geometry. We also perform a careful error analysis which reveals theoretical insights into several sources of error incurred in the model-building process. Finally, we showcase various applications in 2D and 3D, along with favorable comparisons with ground truth solutions.
• Irregular boundary network (IBN) predicts the field PDE solution over arbitrary domains.
• PDE loss to learn watertight boundary conditions imposed by complex geometries.
• A single trained IBN can produce solutions to a PDE across different arbitrary shapes.
• Analysis of convergence and generalization error bounds of the PDE-based loss.
• Illustrate the approach on Poisson’s and Navier–Stokes PDEs for different geometries.
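A masked, physics-informed residual loss of the kind described can be sketched for Poisson's equation on a regular grid, with the binary mask supplying the interior-exterior information. This is a generic sketch, not the paper's loss; the stencil, mask handling, and normalization are illustrative:

```python
import numpy as np

def poisson_residual_loss(u, f, mask, h):
    """Physics-informed loss restricted to the interior of an irregular domain.

    u: predicted field on a regular grid, f: source term,
    mask: 1 inside the geometry, 0 outside, h: grid spacing.
    Residual of -Laplacian(u) = f via the 5-point stencil.
    """
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    res = (-lap - f) * mask            # exterior points contribute nothing
    interior = np.maximum(mask.sum(), 1.0)
    return np.sum(res**2) / interior   # mean squared interior residual

# Sanity check: u(x, y) = x^2 satisfies -Laplacian(u) = -2 exactly,
# so the loss should vanish on the (wrap-free) interior.
n = 16
X = np.arange(n, dtype=float)[:, None] * np.ones(n)
u = X**2
f = -2.0 * np.ones((n, n))
mask = np.zeros((n, n))
mask[1:-1, 1:-1] = 1.0  # exclude the wrap-around boundary of np.roll
loss_val = poisson_residual_loss(u, f, mask, h=1.0)
```

Gradients of such a loss with respect to the network producing `u` are what drive training toward PDE-consistent fields inside the geometry.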
Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention. However, the large majority of neural PDE solvers only apply to rectilinear domains, and do not systematically address the imposition of Dirichlet/Neumann boundary conditions over irregular domain boundaries. In this paper, we present a framework to neurally solve partial differential equations over domains with irregularly shaped (non-rectilinear) geometric boundaries. Our network takes in the shape of the domain as an input (represented using an unstructured point cloud, or any other parametric representation such as Non-Uniform Rational B-Splines) and is able to generalize to novel (unseen) irregular domains; the key technical ingredient to realizing this model is a novel approach for identifying the interior and exterior of the computational grid in a differentiable manner. We also perform a careful error analysis which reveals theoretical insights into several sources of error incurred in the model-building process. Finally, we showcase a wide variety of applications, along with favorable comparisons with ground truth solutions.
Decentralized learning enables a group of collaborative agents to learn models using a distributed dataset without the need for a central parameter server. Recently, decentralized learning algorithms have demonstrated state-of-the-art results on benchmark datasets, comparable with centralized algorithms. However, the key assumption behind this competitive performance is that the data is independently and identically distributed (IID) among the agents, which often does not hold in real-life applications. Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors' datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP). We theoretically analyze the convergence characteristics of CGA and demonstrate its efficiency on non-IID data distributions sampled from the MNIST and CIFAR-10 datasets. Our empirical comparisons show superior learning performance of CGA over existing state-of-the-art decentralized learning algorithms, as well as maintaining the improved performance under information compression to reduce peer-to-peer communication overhead. The code is available on GitHub.
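The two CGA steps can be caricatured on a toy scalar problem. Note the hedge: the paper projects the cross-gradients via quadratic programming, whereas this sketch substitutes a plain average for brevity, and the fully connected three-agent graph is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-IID setup: each agent fits a scalar w to its own data mean.
targets = np.array([0.0, 2.0, 4.0])   # distinct per-agent data distributions

def grad(w_i, target):
    # d/dw of the local loss 0.5 * (w - target)^2
    return w_i - target

# Fully connected toy graph; each neighborhood includes the agent itself.
neighbors = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}

w = rng.standard_normal(3)
lr = 0.2
for _ in range(200):
    # (i) cross-gradients: agent i's model evaluated on neighbor j's data
    cross = np.array([[grad(w[i], targets[j]) for j in neighbors[i]]
                      for i in range(3)])
    # (ii) CGA projects onto these via QP; a plain average stands in here.
    agg = cross.mean(axis=1)
    # Model mixing plus a gradient step, as in decentralized SGD
    w = np.array([w[neighbors[i]].mean() for i in range(3)]) - lr * agg
# All agents drift to the consensus optimum, the mean of the targets (2.0)
```

Even with the simplified aggregation, incorporating neighbors' data into each agent's gradient is what counteracts the non-IID drift that plagues plain gossip averaging.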
We validated 3 distinct hiPSC-CM cell lines (each of different purity) and a voltage-sensitive dye (VSD)-based high-throughput proarrhythmia screening assay as a noncore site in the recently completed CiPA Myocyte Phase II Validation Study. Blinded validation was performed using 12 drugs linked to low, intermediate, or high risk for causing Torsades de Pointes (TdP). Commercially sourced hiPSC-CMs were obtained either from Cellular Dynamics International (CDI, Madison, Wisconsin; iCell Cardiomyocytes²) or Takara Bio (CLS, Cellartis Cardiomyocytes). A third hiPSC-CM cell line (MCH, Michigan) was generated in house. Each cell type had distinct baseline electrophysiological function (spontaneous beat rate, action potential duration, and conduction velocity) and drug responsiveness. Use of VSD and optical mapping enabled the detection of conduction slowing by sodium channel blockers (quinidine, disopyramide, and mexiletine) and of drug-induced TdP-like activation patterns (rotors) for some high- and intermediate-risk compounds. Low-risk compounds did not induce rotors in any cell type tested. These results further validate the utility of hiPSC-CMs for predictive proarrhythmia screening and the utility of VSD technology to detect drug-induced APD prolongation, arrhythmias (rotors), and conduction slowing. Importantly, the results indicate that different ratios of cardiomyocytes to noncardiomyocytes have an important impact on drug response, which may be considered during risk assessment of new drugs. Finally, we present the first blinded CiPA hiPSC-CM validation results to simultaneously detect drug-induced conduction slowing, action potential duration prolongation, action potential triangulation, and drug-induced rotors in a proarrhythmia assay.