Integrated ATM, an integrated accelerated testing methodology for CFRP durability, is described herein. It is expressed as a single formula with several parameters representing the life of CFRP under an arbitrary environmental temperature and an arbitrary strain ratio R from R = 0 to 1. Integrated ATM is based on Christensen's viscoelastic crack kinetics and the conventional ATM. First, Integrated ATM is introduced based on the matrix resin viscoelasticity. Second, the important parameters that affect CFRP life are identified for the longitudinal tensile strength of a unidirectional CFRP as an application of Integrated ATM. Finally, the influence of these parameters on CFRP life is assessed.
•Algorithms in computer vision are customized and complemented for concrete CT.•3D information of 1442 aggregates is extracted from the bulk concrete.•3D percolation appears in the aggregates while they are isolated in 2D.•The heterogeneous modelling with individual aggregate control is 3D printable.•The presented work enables 3D concrete modelling with realistic aggregate input.
A computational sequence to extract individual 3D aggregates from computed tomography (CT) scans of bulk concrete is presented in this paper. The technique combines computer vision algorithms customized for concrete composites with an originally proposed algorithm. It efficiently establishes a database of 1442 aggregates, each with a 3D-printable volume and surface. The presented work can be used to perform heterogeneous modelling with realistic aggregate input and distributive analysis of a specific constituent of interest. The necessity of customizing computer vision algorithms for the application to concrete composites is discussed, with digital damage in the non-destructive measurement given as evidence. A feasibility comparison with available reconstruction methods demonstrates that characterizing the concrete composite with detailed information on each constituent provides a more realistic representation of the composite. The vision of digital concrete with realistic aggregate input is further demonstrated with an example application.
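The extraction pipeline is not spelled out in the abstract; as a minimal sketch of its central step, the following segments and labels individual 3D aggregates in a CT volume with off-the-shelf tools. A synthetic volume and a single grayscale threshold stand in for the real data and the customized algorithms.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Synthetic stand-in for a CT scan: two bright cubic "aggregates"
# (hypothetical) embedded in a darker mortar matrix with noise.
volume = rng.normal(80.0, 10.0, size=(40, 40, 40))
volume[10:18, 10:18, 10:18] += 120.0
volume[25:33, 22:30, 24:32] += 120.0

# 1. Threshold: voxels brighter than the cutoff are candidate aggregate.
mask = volume > 140.0

# 2. Connected-component labeling assigns each 3D aggregate its own id.
labels, n_aggregates = ndimage.label(mask)

# 3. Per-aggregate voxel counts -- the basis for a per-aggregate database
#    of volumes (printable surfaces would come from e.g. marching cubes).
voxel_counts = ndimage.sum(np.ones_like(labels), labels,
                           index=np.arange(1, n_aggregates + 1))
print(n_aggregates, voxel_counts)
```

On real scans, the fixed threshold would be replaced by the paper's customized segmentation, but the label-then-measure structure is the same.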
This research presents the development and application of a constructability design model, which determines the print speed and filament layer height combination that yields the fastest vertical building rate while ensuring successful construction of the object. A 3D concrete printed structural wall element is used to validate the model. High variation in material rheological properties leads to over-prediction by the model if mean model parameter values are used. Consequently, a probabilistic design model is developed to reduce the impact of this variation on the accuracy of the deterministic design model. The first-order reliability method (FORM) is applied and material partial factors are derived.
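The abstract does not reproduce the model equations. For the simplest possible case, a linear limit state with independent normal variables, the core FORM quantities can be sketched as follows; all parameter values are hypothetical, and a real partial-factor derivation would use characteristic rather than mean values.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical linear limit state g = R - S: capacity R (e.g. buildable
# layer strength) minus demand S (e.g. load from the layers above).
mu_R, sd_R = 10.0, 2.0   # assumed mean / std of capacity
mu_S, sd_S = 6.0, 1.0    # assumed mean / std of demand

# For a linear limit state with independent normals, FORM is exact:
# beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
beta = (mu_R - mu_S) / sqrt(sd_R ** 2 + sd_S ** 2)
p_f = NormalDist().cdf(-beta)        # failure probability Phi(-beta)

# Design-point sensitivity alpha_R locates the most probable failure
# point; the ratio of the mean to that point gives a simple (mean-value)
# partial factor on capacity.
alpha_R = sd_R / sqrt(sd_R ** 2 + sd_S ** 2)
gamma_R = mu_R / (mu_R - alpha_R * beta * sd_R)
print(beta, p_f, gamma_R)
```

Nonlinear limit states require the iterative HL-RF algorithm instead of the closed form above.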
Steady technological advances and recent milestones, such as intercontinental quantum communication and the first implementation of medium-scale quantum networks, are paving the way for the establishment of the quantum internet, a network of nodes interconnected by quantum channels. Here we build upon recent models for quantum networks based on optical fibers by considering the effect of a non-uniform distribution of nodes, more specifically one based on the demographic data of the federal states of Brazil. We not only compute the statistical properties of this more realistic network, comparing its features with those of previous models, but also employ it to compute the repetition rates for entanglement swapping, an essential protocol for quantum communication based on quantum repeaters.
•Developed a quantum network model using optical fibers, incorporating the impact of a non-uniform node distribution.•Analyzed the statistical properties of such a quantum network, leveraging demographic and geographical data from Brazil.•Applied the developed model to compute repetition rates for entanglement swapping.
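The paper's rate calculations are not reproduced here; a rough, illustrative sketch of how a repetition rate for a single entanglement swap over two fiber segments is typically estimated (exponential fiber loss, geometric retries with quantum memories) follows. Every parameter value is an assumption for illustration, not the paper's.

```python
# Illustrative parameters (not from the paper): standard telecom fiber
# attenuation and hypothetical source/detection and swap efficiencies.
alpha_db_per_km = 0.2      # fiber loss
p0 = 0.5                   # heralding success at zero distance (assumed)
p_swap = 0.5               # Bell-state measurement success (linear optics)
c_fiber_km_s = 2e5         # speed of light in fiber, km/s

def success_prob(length_km):
    """Heralded entanglement over fiber: p0 * 10**(-alpha * L / 10)."""
    return p0 * 10 ** (-alpha_db_per_km * length_km / 10)

def repeater_rate(total_km):
    """Swaps per second over two equal memory-assisted segments.
    Each attempt takes one classical heralding time over half the link;
    E[attempts until both segments succeed] ~ 3/(2p) for small p."""
    half = total_km / 2
    p = success_prob(half)
    t_attempt = half / c_fiber_km_s
    expected_attempts = 3 / (2 * p)
    return p_swap / (expected_attempts * t_attempt)

print(repeater_rate(100.0))  # swaps per second over a 100 km link
```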
The James Webb Space Telescope (JWST) is expected to observe galaxies at z > 10 that are presently inaccessible. Here, we use a self-consistent empirical model, UniverseMachine, to generate mock galaxy catalogues and light-cones over the redshift range z = 0–15. These data include realistic galaxy properties (stellar masses, star formation rates, and UV luminosities), galaxy–halo relationships, and galaxy–galaxy clustering. Mock observables are also provided for different model parameters spanning observational uncertainties at z < 10. We predict that Cycle 1 JWST surveys will very likely detect galaxies with M* > 10⁷ M⊙ and/or M₁₅₀₀ < −17 out to at least z ∼ 13.5. Number density uncertainties at z > 12 expand dramatically, so efforts to detect z > 12 galaxies will provide the most valuable constraints on galaxy formation models. The faint-end slopes of the stellar mass/luminosity functions at a given mass/luminosity threshold steepen as redshift increases. This is because observable galaxies are hosted by haloes in the exponentially falling regime of the halo mass function at high redshifts. Hence, these faint-end slopes are robustly predicted to become shallower below current observable limits (M* < 10⁷ M⊙ or M₁₅₀₀ > −17). For reionization models, extrapolating luminosity functions with a constant faint-end slope from M₁₅₀₀ = −17 down to M₁₅₀₀ = −12 gives the most reasonable upper limit for the total UV luminosity and cosmic star formation rate up to z ∼ 12. We compare to three other empirical models and one semi-analytic model, showing that the range of predicted observables from our approach encompasses predictions from other techniques. Public catalogues and light-cones for common fields are available online.
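The luminosity-function extrapolation described above can be illustrated with a generic Schechter function; the parameter values below are hypothetical placeholders, not the paper's fits.

```python
import numpy as np

# Hypothetical Schechter parameters for a high-z UV luminosity function
# (phi* in Mpc^-3 mag^-1); these are illustrative, not the paper's values.
phi_star, M_star, alpha = 1e-4, -20.0, -2.0

def schechter(M):
    """phi(M) = 0.4 ln10 phi* x^(alpha+1) exp(-x), x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def uv_luminosity_density(M_faint, M_bright=-24.0, n=20001):
    """Trapezoidal integral of L(M) phi(M) dM, with L scaled so M = 0 -> L = 1."""
    M = np.linspace(M_bright, M_faint, n)
    f = 10.0 ** (-0.4 * M) * schechter(M)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(M)))

# Extending a constant faint-end slope from M_1500 = -17 down to -12
# raises the integrated UV luminosity density:
rho_17 = uv_luminosity_density(-17.0)
rho_12 = uv_luminosity_density(-12.0)
print(rho_12 / rho_17)
```

With a steep faint-end slope (α near −2) the faint-end contribution per magnitude is nearly constant, so the choice of integration limit directly controls the inferred ionizing budget.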
This communication addresses some common misconceptions about weakest link theory and Weibull statistics as they pertain to strength distributions of brittle fibers. After describing the nature of the problem, the flaws in ensuing proposed models of strength distributions are highlighted and discussed. A way forward that obviates the problems is suggested.
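As a reminder of what weakest-link Weibull statistics actually predict, here is a minimal sketch of the standard two-parameter form and its gauge-length size effect, with illustrative parameter values only:

```python
import math

# Weakest-link Weibull for a brittle fiber:
# P_f(sigma; L) = 1 - exp(-(L / L0) * (sigma / sigma0)**m)
m, sigma0, L0 = 5.0, 3.0, 10.0   # Weibull modulus, scale (GPa), gauge (mm)

def failure_prob(sigma, L):
    return 1.0 - math.exp(-(L / L0) * (sigma / sigma0) ** m)

def median_strength(L):
    # Invert P_f = 0.5: sigma = sigma0 * ((L0 / L) * ln 2)**(1 / m)
    return sigma0 * ((L0 / L) * math.log(2.0)) ** (1.0 / m)

# Weakest-link size effect: doubling the gauge length lowers the median
# strength multiplicatively by 2**(-1/m), not by any additive amount.
ratio = median_strength(2 * L0) / median_strength(L0)
print(ratio, 2 ** (-1 / m))
```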
•CNT network size and interphase in nanomembranes act as a crack-arrestor mechanism.•Pre-infused BP has a larger interphase thickness to network size ratio than dry BP.•Weibull analysis shows a robust entangled structure of CNTs in pre-infused BP.•Pre-infused BP enhances multimodal fracture toughness in polymer composites.•A robust CNT network/interphase causes crack deflections in BP membranes.
Delamination is one of the primary damage mechanisms in laminated composites that affects the lifetime integrity of composite systems. The present study examines the effects of Buckypaper (BP) nano-reinforcement on the multimodal interlaminar fracture properties of carbon fiber reinforced polymer (CFRP) composites reinforced with BP membranes. The novelty and focus of this study are the effects of multi-wall carbon nanotube (MWCNT) network size and interphase on the interlaminar fracture toughness of BP nanocomposites. Dry and pre-infused non-functionalized MWCNT BPs were integrated at the mid-layer of the composite before laminate curing. The Atomic Force Microscopy Peak Force Quantitative Nanomechanics Mapping (AFM PFQNM) technique and the Weibull model were applied to study the effects of random CNT network size and interphase on mode I, mode II, and mixed-mode I/II fracture properties. Weibull analysis of 150 dry and pre-infused BP measurements confirms a high scatter of CNT network size in dry BP compared to pre-infused BP. The Weibull moduli of CNT network size (2.33 for large and 5.18 for small networks) and interphase thickness (3.46) indicate a robust, consistently entangled structure of CNTs in pre-infused BP. Dry BP improves the propagation energy release rate of the CFRP laminates by up to 54%; however, it does not provide additional toughening in the crack initiation energy for modes I, II, and I/II. The mode I initiation and propagation, mode II initiation, and mixed-mode I/II fracture energies of pre-infused BP nanocomposites are ∼33%, ∼52.5%, ∼14%, and ∼28% higher than those of the reference. The interlaminar crack either moves exclusively through the BP layer by circumventing the rigid CNT networks or moves forward along the interface of the carbon fiber monofilaments above and below the reinforcing BP layer.
This saw-like interlaminar crack path triggers numerous toughening mechanisms such as nano-toughened epoxy, nanoscale CNT bridging and pullout, and crack deflection, and it increases the total crack propagation path.
This paper introduces an innovative approach for the efficient analysis of composites manufacturing processes and phenomena. The method combines low- and high-fidelity simulation schemes with limited amounts of experimental data to train surrogate machine learning (ML) models. Guided by a novel approach, Spatially Weighted Gaussian Process Regression (SWGPR), a predictive model is efficiently constructed and calibrated by assigning datapoint-dependent noise levels to simulation points, establishing a multi-scale data-driven uncertainty structure. This study demonstrates the effectiveness of the method in accurately predicting process-induced deformations (PIDs) for L-shaped cross-ply laminates using minimal experimental efforts. The presented method aims to provide a cost-effective and broadly applicable framework for understanding and improving the design, development, and manufacturing of composites.
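The SWGPR weighting scheme itself is the paper's contribution and is not reproduced here; the underlying mechanism, a Gaussian process with datapoint-dependent noise levels, can be sketched with scikit-learn's per-sample `alpha` vector. All data and values below are synthetic stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic stand-in: alternating "high-fidelity" (near-noiseless) and
# "low-fidelity" (noisy) simulations of a deformation vs. a process
# parameter. Only the core mechanism is shown: each simulation point
# gets its own noise variance through the alpha vector.
X = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
noise_var = np.where(np.arange(12) % 2 == 0, 1e-6, 1e-2)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0.0, np.sqrt(noise_var))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                               alpha=noise_var,   # datapoint-dependent noise
                               normalize_y=True)
gpr.fit(X, y)

mean, std = gpr.predict(np.array([[0.25]]), return_std=True)
print(mean[0], std[0])
```

The fit automatically trusts the low-noise points more, which is the effect the spatially weighted calibration exploits.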
In the period 1991–2015, algorithmic advances in Mixed Integer Optimization (MIO), coupled with hardware improvements, have resulted in an astonishing 450 billion factor speedup in solving MIO problems. We present an MIO approach for solving the classical best subset selection problem of choosing k out of p features in linear regression given n observations. We develop a discrete extension of modern first-order continuous optimization methods to find high-quality feasible solutions that we use as warm starts for an MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression, and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with n in the 1000s and p in the 100s in minutes to provable optimality, and finds near-optimal solutions for n in the 100s and p in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than the Lasso and other popularly used sparse learning procedures in terms of achieving sparse solutions with good predictive power.
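The discrete first-order warm start can be sketched as projected gradient descent with hard thresholding (iterative hard thresholding); this is a simplified reading of that step on a hypothetical toy problem, not the authors' full implementation.

```python
import numpy as np

def discrete_first_order(X, y, k, steps=500):
    """Gradient steps on 0.5 * ||y - X b||^2 followed by hard thresholding
    to the k largest-magnitude coefficients. The output is a feasible
    warm start for an exact MIO formulation of best subset selection."""
    p = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(steps):
        b = b + X.T @ (y - X @ b) / L    # gradient step
        keep = np.argsort(np.abs(b))[-k:]
        mask = np.zeros(p, dtype=bool)
        mask[keep] = True
        b[~mask] = 0.0                   # project onto ||b||_0 <= k
    return b

# Hypothetical toy problem: 3 true features out of p = 20.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
beta = np.zeros(20)
beta[[0, 5, 9]] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.normal(size=100)

b = discrete_first_order(X, y, k=3)
print(sorted(np.nonzero(b)[0].tolist()))
```

The MIO solver then takes this feasible point and certifies (or improves on) its optimality; that exact step requires a solver such as Gurobi and is omitted here.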
Discovering dynamic characteristics of traffic flow is a significant step in designing effective traffic management and control strategies for relieving congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway over one year. To construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate the similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, the normalized network structure entropy and the cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities exhibit patterns similar to those of the traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak, and transitional hours, according to the correlation between the two aforementioned properties. The resulting classification of states faithfully represents hourly fluctuation in traffic flow, as confirmed by analyzing annual average hourly values of traffic volume, occupancy, and speed in the corresponding hours.
•We construct networks from multivariate traffic flow time series.•A weighted Frobenius norm is adopted to estimate the similarity between multivariate time series.•Principal Component Analysis is implemented to determine the weights.•We analyze the normalized network structure entropy and cumulative probability of degree.•We classify traffic states according to the above two properties.
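The pipeline in this abstract (weighted Frobenius similarity, thresholded adjacency, degree-based entropy) can be sketched end-to-end on synthetic data; the PCA-derived weights and the threshold-selection rule are replaced by simple assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the data: N "hours", each a multivariate time series
# (T timesteps x 3 variables: volume, occupancy, speed).
N, T = 24, 60
series = rng.normal(size=(N, T, 3))

# Variable weights; the paper derives these from PCA, here they are
# simply assumed.
w = np.array([0.5, 0.3, 0.2])

def weighted_frobenius(a, b, w):
    """Weighted Frobenius distance between two T x d series."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Distance matrix -> adjacency by thresholding: nodes are hours, and an
# edge means the two hourly series are "similar enough".
D = np.array([[weighted_frobenius(series[i], series[j], w)
               for j in range(N)] for i in range(N)])
threshold = np.percentile(D[np.triu_indices(N, 1)], 30)   # assumed cutoff
A = (D < threshold) & ~np.eye(N, dtype=bool)

# Normalized network structure entropy from the degree distribution.
deg = A.sum(axis=1)
p = deg / deg.sum()
entropy = -np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(N)
print(entropy)
```

On real data, hours with similar flow regimes cluster into high-degree nodes, and the entropy tracks how evenly connectivity is spread across the day.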