Abstract: According to Roy Claude (1995: 19), there are two forms of human communication: verbal and non-verbal. Verbal communication makes use of speech, while non-verbal communication transmits a message without speech. Silence is a part of non-verbal communication. This study revealed that silence can have both negative and positive aspects in human communication. In the field of information and communication sciences, silence is a safe-haven value, a guarantor of listening and respect, that is, of altruism and humanism.
• An improved AK-MCS method is proposed to estimate small failure probabilities.
• An optimal β-sphere is searched for without any extra model evaluations.
• The Kriging model is continuously updated layer by layer outside the current β-sphere.
• The adaptive Kriging model is complete once an optimal β-sphere is found.
• The candidate sampling pool in the proposed algorithm is remarkably reduced.
The pivotal problem in reliability analysis is how to obtain accurate failure probabilities with as few model evaluations as possible. To this end, an iterative method combining Monte Carlo simulation with an adaptive Kriging (AK) model (abbreviated AK-MCS) was proposed by Echard et al. in 2011. For small failure probabilities, however, the number of candidate points required for a convergent solution is extremely large. In AK-MCS, these points must all be evaluated by the current Kriging model to select the best next sample for updating it, so a large candidate pool makes the adaptive updating process much more time-consuming. Therefore, to improve the applicability of AK-MCS to small failure probabilities, adaptive radial-based importance sampling (ARBIS) is employed to reduce the number of candidate points, and an ARBIS combined with AK model method (abbreviated AK-ARBIS) is proposed. The idea of ARBIS is to adaptively find the optimal β-sphere, i.e., the largest sphere contained in the safe domain; samples inside the optimal β-sphere are then directly recognized as safe and do not require calls to the true limit state function to judge their states (safe or failed). During the adaptive search for the optimal β-sphere, the Kriging model is continuously updated layer by layer, based on the U learning scheme, in each sampling pool containing only the samples within the current spherical ring. The updating process stops once the optimal β-sphere has been found and the convergence condition is satisfied. By finding the optimal β-sphere, the total candidate set is reduced to the samples outside it; moreover, the whole candidate sampling pool is partitioned sequentially into several sub-pools. The proposed method not only inherits the advantages of AK-MCS but also reduces its analysis time in two ways: the size of the candidate sampling pool is reduced, and fewer actual limit state function evaluations are needed because sampling points located inside the adaptively searched optimal β-sphere do not participate in the training process. The effectiveness and accuracy of the proposed AK-ARBIS method for estimating small failure probabilities are verified on a highly nonlinear numerical case, a nonlinear oscillator system, a simplified wing box structural model, an aero-engine turbine disk, and a planar ten-bar structure.
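To make the updating scheme concrete, here is a minimal Python sketch of the U-function learning loop that AK-MCS and AK-ARBIS share, restricted to a single spherical ring of candidates as AK-ARBIS prescribes. The limit state function g, the ring radii, the pool size, and the stopping threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):
    # hypothetical limit state: failure when g(x) <= 0
    return 5.0 - np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
pool = rng.standard_normal((10_000, 2))        # Monte Carlo candidate pool
r = np.linalg.norm(pool, axis=1)
ring = pool[(r > 1.5) & (r <= 2.5)]            # only the current ring is active

# small initial design of experiments drawn from the ring
doe = rng.choice(len(ring), size=12, replace=False)
X, y = ring[doe], g(ring[doe])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(50):
    gp.fit(X, y)
    mu, sigma = gp.predict(ring, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)  # U learning function
    if U.min() >= 2.0:                         # sign of g trusted on the whole ring
        break
    best = np.argmin(U)                        # most ambiguous candidate
    X = np.vstack([X, ring[best]])
    y = np.append(y, g(ring[best:best + 1]))   # one true model call per iteration

# points inside the beta-sphere (r <= 1.5) are declared safe with no model
# call at all; outer rings would be processed the same way in turn
p_ring = np.sum(gp.predict(ring) <= 0) / len(pool)
print(f"failure mass classified in this ring: {p_ring:.4f}")
```

The saving claimed in the abstract is visible in the pool construction: samples inside the current β-sphere never enter the candidate set, so they trigger neither Kriging predictions nor true limit state calls.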
• Compressive and flexural strengths of SFRC are successfully predicted by machine learning algorithms.
• Tree-based and boosting models are recommended for SFRC predictions.
• The w/c ratio and silica fume are the most important parameters for predicting compressive strength.
• Fiber volume fraction and silica fume are the most important for predicting flexural strength.
• XGBoost and gradient-boosting regressors are selected as the most appropriate machine learning algorithms for SFRC.
Steel fiber-reinforced concrete (SFRC) has performance superior to that of normal concrete because of the addition of discontinuous fibers. The development of strength prediction techniques for SFRC is, however, still in its infancy compared with that for normal concrete, because of SFRC's complexity and the limited available data. To overcome this limitation, research was conducted to develop an optimal machine learning algorithm for predicting the compressive and flexural strengths of SFRC. The resulting feature impacts were also analyzed to confirm the reliability of the models. To achieve this, compressive and flexural strength data for SFRC were collected through extensive literature reviews, and a database was created. Eleven machine learning algorithms were then established based on the dataset. K-fold validation was conducted to prevent overfitting, and the algorithms were tuned. The boosting- and tree-based models had the best performance, whereas the K-nearest neighbor, linear, ridge, lasso, support vector, and multilayer perceptron regressors had the worst. The water-to-cement ratio and silica fume content were the most influential factors in predicting the compressive strength of SFRC, whereas the silica fume and fiber volume fraction most strongly influenced the flexural strength. Finally, it was found that, in general, compressive strength was predicted better than flexural strength, regardless of the machine learning algorithm.
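As a rough illustration of the pipeline described above, the sketch below fits a gradient-boosting regressor (a stand-in for the recommended XGBoost/gradient-boost models) with 5-fold validation and reads off feature importances. The synthetic records and their coefficients are invented placeholders that only mimic the paper's input columns; the real SFRC database would replace them.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.uniform(0.25, 0.60, n),   # water-to-cement ratio
    rng.uniform(0.0, 0.15, n),    # silica fume fraction
    rng.uniform(0.0, 0.02, n),    # fiber volume fraction
    rng.uniform(300, 500, n),     # cement content (kg/m^3)
])
# toy compressive-strength response, for demonstration only
y = 80 - 90 * X[:, 0] + 60 * X[:, 1] + 200 * X[:, 2] + rng.normal(0, 2, n)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
cv = KFold(n_splits=5, shuffle=True, random_state=0)   # K-fold guards overfitting
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print("5-fold R^2:", scores.round(3))

model.fit(X, y)
names = ["w/c", "silica fume", "fiber volume", "cement"]
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.3f}")
```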
There has always been a strong connection between a child's growth and development. The teacher makes this connection possible through patience, trust, constant monitoring, and the discovery and exploitation of innate abilities. We chose the tools for improving motor qualities in the growth and development of morpho-functional indices at ages 11-13 according to the available material base and the particularities of middle-school pupils. We can conclude that games are precious educational tools and that teachers should use them more and more during their classes, given their strong positive effect on pupils' personality and development; motor skills, in the growth and development of morpho-functional indices at ages 11-13, play a decisive role in our students' harmonious development and integration into society.
One key task in planning and organizing Inspection and Maintenance (I&M) works in an integrated energy system (IES) is to handle its weakest vulnerabilities, that is, to decide when and where in the system I&M actions or countermeasures ought to be performed. This requires a pinpoint prediction of the spatiotemporal distribution of these vulnerabilities, so that utilities gain more preparation time to appropriately allocate limited I&M manpower and equipment. With this motivation, and in light of both internal and surrounding conditions, this paper proposes a prognosis framework, namely Importance-Fuzzy High-utility Pattern Identification with lifetime-dependent factors (IFHPIlf). IFHPIlf targets the component-vulnerability patterns with higher profits rather than simply higher frequencies. Consequently, when applied to imbalanced distributed databases, the framework can incorporate High-Impact-Low-Probability (HILP) components as well, without extra steps for separating and assessing rare and common components. To assess the utilities of each component and transaction, a parallel learning architecture is formulated to evaluate the two corresponding attributes, unit price and quantity, respectively. To estimate the unit price, the risk level of each component is quantitatively rated via the built BP Time-dependent-lifetime Importance (BPTdlI) model, which integrates the direct impacts of each component's lifetime distribution on the reliability distribution of the whole system, together with all underlying system failure-related hazard cut sets; data dependence on temporal scales can thus be taken into account. To qualitatively differentiate the perilous components, the quantity is calculated in the established Importance-Fuzzy High-utility Pattern Identification (IFHPI) model, wherein the time-dependent-lifetime importance FIS are combined with the Fuzzy Interest-Utility Measures (FIUM). Hence, discrete and continuous features are handled within the same entity, and decisions can be based directly on their influences. Finally, the flexibility and feasibility of the framework in application are demonstrated through an empirical case study.
• A prognosis framework for the spatiotemporal distribution of IES vulnerabilities is designed.
• A high-utility pattern identification method is built for imbalanced databases.
• Time-dependent-lifetime importance is assessed for temporal dependence among data.
• Importance FIS-based interest-utility measures are formed for multi-type inputs.
• An empirical case study validates its feasibility during applications.
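The framework's core accounting can be illustrated with a drastically reduced sketch of high-utility pattern identification: each component carries a risk-derived "unit price" and a per-transaction "quantity", and a pattern is retained when its summed utility, not merely its frequency, clears a threshold. All component names, prices, transactions, and the threshold below are invented for illustration; the paper's BPTdlI model and fuzzy interest-utility measures are far richer than this.

```python
from itertools import combinations

unit_price = {"pump": 4.0, "valve": 1.5, "sensor": 0.5}   # risk-derived prices
transactions = [                                          # component -> quantity
    {"pump": 2, "valve": 1},
    {"pump": 1, "sensor": 3},
    {"valve": 2, "sensor": 1},
]

def utility(pattern, tx):
    # a pattern contributes utility only in transactions containing all of it
    if not all(c in tx for c in pattern):
        return 0.0
    return sum(unit_price[c] * tx[c] for c in pattern)

items = sorted(unit_price)
min_util = 8.0
for size in (1, 2):
    for pattern in combinations(items, size):
        total = sum(utility(pattern, tx) for tx in transactions)
        if total >= min_util:             # high-utility, not just frequent
            print(pattern, total)
```

Weighting by utility is what lets rare HILP components survive: a pattern seen once but with a high risk-derived price can outscore a frequent but harmless one.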
Filter pruning has achieved remarkable success in reducing memory consumption and speeding up inference for convolutional neural networks (CNNs). Some prior works, such as heuristic methods, attempt to search for suitable sparse structures during the pruning process, which can be expensive and time-consuming. In this paper, an efficient cross-layer importance evaluation (CIE) method is proposed to automatically calculate proportional relationships among convolutional layers. First, every layer is pruned separately via grid sampling to obtain the model accuracy at all sampling points. Then, contribution matrices are built to describe the importance of each layer to model accuracy. Finally, a binary search is used to find the optimal sparse structure under a target pruning ratio. Extensive experiments on multiple representative image classification tasks demonstrate that the proposed method achieves better compression performance at little time cost compared with existing pruning algorithms. For instance, it reduces FLOPs by more than 50% with losses of only 0.93% and 0.43% in top-1 and top-5 accuracy for ResNet50, respectively. At the cost of only 0.24% accuracy loss, the pruned VGG19 model's parameters are compressed by 27.23× and its throughput increases by 2.46×. Overall, CIE is highly effective for deploying CNN models on edge devices in terms of both efficiency and accuracy.
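A sketch of the search step, under stated assumptions: suppose each layer has already been pruned in isolation over a grid of sparsities and its accuracy recorded (the contribution-matrix information); a binary search then picks the smallest accuracy-drop budget whose induced per-layer sparsities meet the FLOPs target. The accuracy curves and FLOPs counts below are placeholder values standing in for measured ones.

```python
import numpy as np

sparsity_grid = np.linspace(0.0, 0.9, 10)          # grid-sampled prune ratios
accuracy_grid = {                                   # per-layer accuracy curves
    "conv1": 0.76 - 0.02 * sparsity_grid**2,
    "conv2": 0.76 - 0.10 * sparsity_grid**2,        # more sensitive layer
}
layer_flops = {"conv1": 4e8, "conv2": 6e8}
target_flops = 0.5 * sum(layer_flops.values())      # e.g. prune 50% of FLOPs

def structure_for(drop_budget):
    """Largest per-layer sparsity whose accuracy drop stays within budget."""
    s = {}
    for name, acc in accuracy_grid.items():
        ok = acc >= acc[0] - drop_budget
        s[name] = sparsity_grid[ok].max()
    return s

def flops_of(s):
    return sum(f * (1 - s[n]) for n, f in layer_flops.items())

lo, hi = 0.0, 0.2                                   # binary search on the budget
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if flops_of(structure_for(mid)) > target_flops:
        lo = mid                                    # still too costly: allow more drop
    else:
        hi = mid
print(structure_for(hi), flops_of(structure_for(hi)) / 1e8, "x1e8 FLOPs")
```

The design point mirrors the abstract: the expensive part (per-layer accuracy sampling) is done once, and the cross-layer allocation is then a cheap search over recorded numbers rather than repeated retraining.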
• We develop an adaptive simulation method for reliability analysis based on sequential importance sampling.
• The method samples a sequence of distributions that gradually approach the optimal importance sampling density.
• We propose two MCMC algorithms for sampling the intermediate distributions.
• We demonstrate the accuracy and efficiency of the method in low- and high-dimensional component and system problems.
This paper proposes the application of sequential importance sampling (SIS) to the estimation of the probability of failure in structural reliability. SIS was originally developed in the statistical community for exploring posterior distributions and estimating normalizing constants in the context of Bayesian analysis. The basic idea of SIS is to gradually translate samples from the prior distribution to samples from the posterior distribution through a sequential reweighting operation. In the context of structural reliability, SIS can be applied to produce samples of an approximately optimal importance sampling density, which can then be used for estimating the sought probability. The transition of the samples is defined through the construction of a sequence of intermediate distributions. We present a particular choice of the intermediate distributions and discuss the properties of the derived algorithm. Moreover, we introduce two MCMC algorithms for application within the SIS procedure: one that is applicable to general problems with a small to moderate number of random variables, and one that is especially efficient for tackling high-dimensional problems.
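A compact sketch of the approach for estimating P(g(X) ≤ 0): the failure indicator is smoothed with a normal CDF, Φ(-g/s), and s shrinks over the levels while samples are reweighted, resampled, and moved with one Metropolis-Hastings step per level. The toy limit state, the fixed level schedule (the paper chooses levels adaptively), and the simple random-walk kernel are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def g(x):
    # toy limit state in standard normal space; failure when g <= 0
    return 3.5 - x.sum(axis=1) / np.sqrt(x.shape[1])

d, N = 2, 2000
rng = np.random.default_rng(1)
x = rng.standard_normal((N, d))            # samples from the input density
s = np.inf                                 # current smoothing parameter
log_const = 0.0                            # accumulates the level ratios

for s_next in [3.0, 1.0, 0.3, 0.1]:        # fixed schedule (adaptive in the paper)
    w = norm.cdf(-g(x) / s_next)
    if np.isfinite(s):
        w = w / norm.cdf(-g(x) / s)        # incremental importance weights
    log_const += np.log(np.mean(w))
    idx = rng.choice(N, size=N, p=w / w.sum())
    x = x[idx]                             # resample, then one MH move each
    prop = x + 0.5 * rng.standard_normal((N, d))
    log_acc = (norm.logpdf(prop).sum(axis=1) + norm.logcdf(-g(prop) / s_next)
               - norm.logpdf(x).sum(axis=1) - norm.logcdf(-g(x) / s_next))
    accept = np.log(rng.uniform(size=N)) < log_acc
    x[accept] = prop[accept]
    s = s_next

# correct from the last smoothed density back to the true indicator
p_f = np.exp(log_const) * np.mean((g(x) <= 0) / norm.cdf(-g(x) / s))
print(f"SIS estimate of the failure probability: {p_f:.2e}")
```

The running product of weight means estimates the normalizing constant of the final smoothed density, exactly the role normalizing-constant estimation plays in the Bayesian setting SIS came from.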
• A random forest method is used to analyze travel mode choices for enhanced prediction capability.
• We determined the optimal model parameters for a robust model specification.
• We compared the model performance with other approaches within the travel mode choice context.
• The RF method has superior prediction accuracy, fast computation speed, and good interpretability.
The analysis of travel mode choice is important in transportation planning and policy-making for understanding and forecasting travel demand. Research in machine learning has been exploring random forests as a framework within which many traffic and transport problems can be investigated. The random forest (RF) is a powerful method for constructing an ensemble of random decision trees. It de-correlates the decision trees in the ensemble via randomization, which improves forecasting and reduces variance when averaging over the trees. However, the usefulness of RF for travel mode choice behavior remains largely unexplored. This paper proposes a robust random forest method to analyze travel mode choices, examining both prediction capability and model interpretability. Using travel diary data from Nanjing, China in 2013, enriched with variables on the built environment, the effects of different model parameters on prediction performance are investigated. The comparison results show that the random forest method performs significantly better in travel mode choice prediction, with higher accuracy and lower computational cost. In addition, the proposed method estimates the relative importance of the explanatory variables and how they relate to mode choices. This is fundamental for a better understanding and effective modeling of people's travel behavior.
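A minimal sketch of the modelling setup: a random forest classifier over trip and built-environment features, with the parameters the paper tunes (number of trees, tree depth) exposed and variable importances reported afterwards. The synthetic trips and the toy labelling rule below merely stand in for the Nanjing travel-diary records.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([
    rng.uniform(0.5, 20, n),      # trip distance (km)
    rng.integers(18, 70, n),      # traveller age
    rng.integers(0, 2, n),        # car ownership (0/1)
    rng.uniform(0, 50, n),        # bus-stop density near origin
])
# toy rule: long trips with a car -> car, very short trips -> walk, else bus
y = np.where((X[:, 0] > 8) & (X[:, 2] > 0.5), "car",
             np.where(X[:, 0] < 2, "walk", "bus"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
rf.fit(X_tr, y_tr)
print("held-out accuracy:", rf.score(X_te, y_te))

for name, imp in zip(["distance", "age", "car ownership", "stop density"],
                     rf.feature_importances_):
    print(f"{name:>14}: {imp:.3f}")
```

The `feature_importances_` readout is what supports the interpretability claim: it ranks the explanatory variables by how much they drive the predicted mode choice.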
Particle filters have proven very effective for nonlinear/non-Gaussian systems. However, the main disadvantages of a particle filter are particle degeneracy and sample impoverishment. An improved particle filter based on the Pearson correlation coefficient (PPC) is proposed to mitigate these problems. The PPC is adopted to determine whether the particles are close to the true states. By resampling the particles in the prediction step, the new PF performs better than the generic PF. Finally, simulations are carried out to illustrate the effectiveness of the proposed filter.
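Since the abstract leaves the PPC construction unspecified, the sketch below implements a standard bootstrap particle filter on the classic univariate nonlinear growth model and marks where a Pearson-correlation score could enter: each particle's recent predicted-observation window is correlated with the measured window and folded into the weights before resampling. This stand-in is an assumption; the paper's exact PPC step may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, W = 500, 50, 5                       # particles, steps, correlation window

def f(x, k):                               # state transition
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):                                  # measurement model
    return x**2 / 20

# simulate a ground truth and its noisy measurements
x_true, ys = 0.1, []
for k in range(T):
    x_true = f(x_true, k) + rng.normal(0, np.sqrt(10))
    ys.append(h(x_true) + rng.normal(0, 1))

particles = rng.normal(0, 2, N)
pred_hist = np.zeros((N, 0))               # per-particle predicted observations
estimates = []
for k, y in enumerate(ys):
    particles = f(particles, k) + rng.normal(0, np.sqrt(10), N)
    y_pred = h(particles)
    pred_hist = np.hstack([pred_hist, y_pred[:, None]])[:, -W:]

    w = np.exp(-0.5 * (y - y_pred) ** 2) + 1e-300   # Gaussian likelihood, sigma = 1
    if pred_hist.shape[1] == W:            # PPC-style proximity score (assumed form)
        yv = np.array(ys[k - W + 1:k + 1])
        Pc = pred_hist - pred_hist.mean(axis=1, keepdims=True)
        yc = yv - yv.mean()
        r = (Pc @ yc) / (np.linalg.norm(Pc, axis=1) * np.linalg.norm(yc) + 1e-12)
        w *= (1 + r) / 2                   # favour particles tracking the data
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)       # resample
    particles, pred_hist = particles[idx], pred_hist[idx]
    estimates.append(particles.mean())

print("final state estimate vs truth:", estimates[-1], x_true)
```

Down-weighting particles whose predicted-observation history decorrelates from the data is one plausible way to keep the resampled cloud near the true state, which is the degeneracy/impoverishment issue the abstract targets.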