A Software Life Cycle (SLC) describes the phases of the software cycle and helps ensure that good-quality software is built. A Software Reliability Hierarchical Structure Modeling (SRHSM) method is proposed for the analysis and allocation of software reliability before the software is put into use. A software system can be regarded as hierarchical, composed of a set of interacting system elements that are implemented to fulfill their respective specified requirements. Building on previous work, the concepts of the SRHSM method are refined and additional basic elements are introduced. The SRHSM method comprises two processes within the SLC, partition and composition, and divides the system into four levels: system level, subsystem level, unit level, and code level. The partition process allocates the software reliability to each module, and the composition process is applied to analyze the software reliability.
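The partition/composition split can be sketched on a toy series-structured system. Everything below is an illustrative assumption: a simple series reliability model and an equal-allocation rule, not the paper's actual SRHSM element types or formulas.

```python
# Composition: the reliability of a level is computed from its children.
# Here we assume a simple series structure (all units must work), which is
# only one possible composition rule.
def compose(unit_reliabilities):
    r = 1.0
    for u in unit_reliabilities:
        r *= u
    return r

# Partition: a system-level reliability target is allocated equally to n
# modules so that the series composition of the modules meets the target.
def allocate_equal(target, n):
    return target ** (1.0 / n)

# Allocate a 0.95 system target over 4 units, then compose it back.
target = 0.95
per_module = allocate_equal(target, 4)
assert abs(compose([per_module] * 4) - target) < 1e-9
```

Round-tripping allocation through composition, as in the last line, is the basic consistency check the two SRHSM processes must satisfy.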
Decentralized integration of distributed generation units and loads is increasing in the modern distribution grid, and maintaining the power flow and power quality within their accepted limits is a challenging task. The formation of meshed hybrid microgrids is an effective way to improve the power system, and the smart transformer (ST) is a promising solution for establishing meshed hybrid microgrids in the distribution system. This article analyzes the performance of an ST-based meshed hybrid microgrid interconnected to the main grid feeder through the medium-voltage (MV) dc link of a second ST. Coordinated operation of the interconnected ST system is proposed to explore the features of this configuration. During normal operation, the MVdc bus voltage is controlled by one ST, which reduces the complexity of the overall control. Main-grid and microgrid MVac source failures and converter fault conditions are explored to analyze the reliability of the proposed microgrid structure. Moreover, the reactive power support capability and active power losses of the proposed system are compared with existing solutions. Simulation and experimental results are presented to demonstrate the operation of the proposed system.
This paper proposes a new methodology for the automated design of power electronic systems realized through the use of artificial intelligence. Existing approaches either do not consider the system's reliability as a performance metric or are limited to reliability evaluation for a certain fixed set of design parameters. The method proposed in this paper establishes a functional relationship between design parameters and reliability metrics and uses it as the basis for optimal design. The first step in this new framework is to create a nonparametric surrogate model of the power converter that can quickly map the variables characterizing the operating conditions (e.g., ambient temperature and irradiation) and design parameters (e.g., switching frequency and dc-link voltage) into variables characterizing the thermal stress of the converter (e.g., the mean temperature and temperature variation of its devices). This step can be carried out by training a dedicated artificial neural network (ANN) on either experimental or simulation data. The resulting network, named ANN 1, can be deployed as an accurate surrogate converter model and used to quickly map the yearly mission profile into a thermal stress profile of any selected device for a large set of design parameter values. The resulting data are then used to train ANN 2, which becomes an overall system representation that explicitly maps the design parameters into a yearly lifetime consumption. To verify the proposed methodology, ANN 2 is deployed in conjunction with standard converter design tools on an exemplary grid-connected PV converter case study. This study shows how to find the optimal balance between reliability and output filter size in the system with respect to several design constraints. The paper is also accompanied by a comprehensive dataset that was used for training the ANNs.
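The two-stage data flow (operating conditions + design parameters → thermal stress → yearly lifetime consumption) can be sketched as follows. The functions `ann1_surrogate` and `ann2_lifetime_consumption` are hypothetical stand-ins with toy physics-flavoured relations, not the trained networks or lifetime model from the paper.

```python
import numpy as np

# Stand-in for ANN 1: maps (ambient temperature, irradiance, switching
# frequency, dc-link voltage) to device thermal stress (mean temperature
# and temperature swing). Coefficients are illustrative only.
def ann1_surrogate(T_amb, G, f_sw, V_dc):
    T_mean = T_amb + 0.03 * G + 0.002 * f_sw
    dT = 0.01 * G * (V_dc / 700.0)
    return T_mean, dT

# Stand-in for ANN 2: maps design parameters to yearly lifetime consumption
# by aggregating the thermal stress over a mission profile, using a
# Coffin-Manson-style damage increment with made-up constants.
def ann2_lifetime_consumption(f_sw, V_dc, mission):
    consumption = 0.0
    for T_amb, G in mission:
        T_mean, dT = ann1_surrogate(T_amb, G, f_sw, V_dc)
        consumption += 1e-9 * (dT ** 2.5) * np.exp(0.05 * (T_mean - 25.0))
    return consumption

# Toy yearly mission profile: (ambient temperature in C, irradiance in W/m^2).
mission = [(10.0, 200.0), (20.0, 600.0), (30.0, 900.0)]
yearly_damage = ann2_lifetime_consumption(f_sw=10_000.0, V_dc=700.0, mission=mission)
```

A design optimizer would then sweep `f_sw` and `V_dc` (and the filter size) against `yearly_damage` to trade reliability off against the other design constraints.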
The robustness and efficiency of the performance measure approach (PMA) depend on the reliability loop in reliability-based design optimization (RBDO). For the reliability loop in PMA using the minimum performance target point (MPTP) search, existing approaches can obtain stable results but may converge to inaccurate results, and higher computational effort is required to achieve the optimum for highly nonlinear problems. In this paper, a hybrid descent mean value (HDMV) approach is proposed based on a novel merit function, which is applied to combine the MPTP search formulas of the descent mean value (DMV) and advanced mean value (AMV) methods. The merit function is used to adaptively control the numerical instability of the inverse reliability analysis in PMA-based RBDO. The accuracy, robustness, and efficiency of the proposed DMV and HDMV methods are compared with existing methods through four nonlinear performance functions, two structural RBDO problems, and a complex aircraft panel problem. The results illustrate that the DMV and HDMV methods are more robust, efficient, and accurate than existing reliability methods. For the aircraft panel problem, a simultaneous buckling pattern is achieved by the proposed methods with better performance in terms of both convergence rate and computational efficiency.
•A hybrid descent mean value (HDMV) approach is proposed for reliability analysis and RBDO using PMA.
•A merit function is applied to combine the descent mean value (DMV) and advanced mean value (AMV) methods.
•The accuracy, robustness, and efficiency of HDMV are compared with several PMA-based reliability methods.
•An aircraft panel problem is optimized using a buckling probabilistic constraint to illustrate the performance of HDMV.
•The HDMV method provides robust and efficient results and shows a simultaneous buckling pattern for the aircraft problem.
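The AMV building block that DMV and HDMV modify is the standard PMA inverse-reliability iteration, which repeatedly projects the gradient of the performance function onto the target-reliability sphere. The sketch below shows that bare AMV loop on a toy linear performance function; the paper's merit function and the DMV/HDMV hybridization are not reproduced here.

```python
import numpy as np

# Basic AMV iteration for the PMA minimum performance target point (MPTP)
# search in standard normal space: u_{k+1} = -beta * grad g(u_k) / ||grad g(u_k)||.
def amv_mptp(grad_g, beta, n_dims, iters=50, tol=1e-8):
    u = np.zeros(n_dims)  # start at the mean value point
    for _ in range(iters):
        g = grad_g(u)
        u_new = -beta * g / np.linalg.norm(g)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u  # may oscillate for concave/highly nonlinear g (the paper's motivation)

# Toy performance function g(u) = 5 - u1 - u2, so grad g = (-1, -1) everywhere;
# for a linear g, AMV converges in a single step.
grad = lambda u: np.array([-1.0, -1.0])
u_star = amv_mptp(grad, beta=2.0, n_dims=2)
```

For this linear example the MPTP lands at `beta * (1, 1) / sqrt(2)`, i.e. on the sphere of radius `beta`; HDMV's merit function exists precisely to keep this loop stable when `g` is nonlinear enough that plain AMV cycles.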
In the last decade, Bayesian networks (BNs) have been identified as a powerful tool for human reliability analysis (HRA), with multiple advantages over traditional HRA methods. In this paper, we illustrate how BNs can be used to include additional, qualitative causal paths to provide traceability. The proposed framework provides the foundation to resolve several needs frequently expressed by the HRA community. First, the developed extended BN structure reflects the causal paths found in the cognitive psychology literature, thereby addressing the need for causal traceability and a strong scientific basis in HRA. Second, the use of node reduction algorithms allows the BN to be condensed to a level of detail at which quantification is as straightforward as the techniques used in existing HRA. We illustrate the framework by developing a BN version of the critical data misperceived crew failure mode of the IDHEAS HRA method, which is currently under development at the US NRC. We illustrate how the model could be quantified with a combination of expert probabilities and information from operator performance databases such as SACADA. This paper lays the foundations necessary to expand the cognitive and quantitative foundations of HRA.
•A framework for building traceable BNs for HRA, based on cognitive causal paths.
•A qualitative BN structure, directly showing these causal paths, is developed.
•Node reduction algorithms are used for making the BN structure quantifiable.
•BN quantified through expert estimates and observed data (Bayesian updating).
•The framework is illustrated for a crew failure mode of IDHEAS.
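The node-reduction idea can be illustrated on a minimal chain BN, Context → Cognition → Failure: summing out the intermediate Cognition node yields a condensed Context → Failure table that is easier to elicit from experts. The node names and all conditional probability values below are illustrative assumptions, not taken from IDHEAS or SACADA.

```python
# P(cognition degraded | context), for two hypothetical context states.
p_cog_given_ctx = {"nominal": 0.05, "adverse": 0.40}

# P(crew failure | cognition state).
p_fail_given_cog = {True: 0.30, False: 0.01}

def reduced_cpt(ctx):
    """P(failure | context) with the intermediate cognition node summed out,
    i.e. the condensed table a node-reduction algorithm would produce."""
    p_deg = p_cog_given_ctx[ctx]
    return p_deg * p_fail_given_cog[True] + (1.0 - p_deg) * p_fail_given_cog[False]

p_nominal = reduced_cpt("nominal")   # 0.05*0.30 + 0.95*0.01 = 0.0245
p_adverse = reduced_cpt("adverse")   # 0.40*0.30 + 0.60*0.01 = 0.126
```

The reduced two-row table is what gets quantified against expert estimates or operator performance data, while the full chain retains the causal traceability.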
Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers, components that are highly susceptible to wind-induced vibrations. As the size, flexibility, and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). The study proposes a multi-modal wind turbine model for wind turbine control design and analysis and incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for the towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations are carried out to ascertain the effectiveness of the active control system in improving the structural performance and reliability of the wind turbine. Fragility curves are constructed to illustrate the reductions in the vulnerability of the towers to wind loading owing to the inclusion of the damper. The results show that the active controller succeeds in increasing the reliability of the tower responses; in particular, a strong reduction in the probability of exceeding a given displacement at the rated wind speed is observed.
•Improvements in structural reliability when wind turbine towers are equipped with an ATMD.
•The proposed controller is capable of remarkably improving the response of the tower.
•Fragility curves illustrate reductions in the vulnerability of the tower due to the ATMD.
•The active controller is successful in increasing the reliability of the tower response.
•Large reduction in the probability of exceeding a given displacement at the rated speed.
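A fragility curve of the kind described above is commonly parameterized as a lognormal CDF of the exceedance probability versus intensity measure (here, wind speed). The sketch below shows that shape and how a damper that shifts the median capacity upward lowers the exceedance probability at a given speed; the median and dispersion values are invented for illustration, not fitted to the paper's simulations.

```python
import math

# Lognormal fragility curve: probability that the tower displacement demand
# exceeds a capacity threshold at a given wind speed. `median` is the wind
# speed at 50% exceedance probability, `beta` the lognormal dispersion.
def fragility(wind_speed, median=18.0, beta=0.35):
    z = math.log(wind_speed / median) / (beta * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# An effective ATMD shifts the demand-side median upward (the tower tolerates
# higher winds), reducing the exceedance probability at the rated speed.
p_no_tmd = fragility(12.0, median=18.0)
p_with_tmd = fragility(12.0, median=24.0)
```

Plotting `fragility` over a range of wind speeds for both medians reproduces the qualitative "curve shifts right with the damper" picture the abstract describes.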
With the emergence of the industrial internet of things, distributed data storage systems have become widely used to store the monitoring data of power generation systems. Malicious hackers often try to destroy or steal these confidential data by illegally invading the systems. In addition to hackers' illegal intrusions, the availability of the stored data is also affected by internal failures of the system. In this research, two reliability models are formulated to study the reliability of a phased-mission distributed data storage system considering internal failures and illegal intrusion. The first model considers internal failures and data destruction, whereas the second model further considers data theft. Furthermore, the allocation of the data partitions is optimized so that system reliability is maximized. Numerical experiments demonstrate the effectiveness of the proposed models and algorithms.
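The allocation step can be sketched with a deliberately simplified model: partitions placed on distinct nodes, the mission succeeding only if every partition survives, and survival probabilities assumed independent. All of that (the reliability model, the exhaustive search, and the numbers) is an illustrative assumption, not the paper's phased-mission formulation.

```python
from itertools import permutations

# Toy reliability model: partition i is stored on node allocation[i], and the
# system survives only if every partition's node survives (independence assumed).
def system_reliability(allocation, node_survival):
    r = 1.0
    for node in allocation:
        r *= node_survival[node]
    return r

# Exhaustive search over placements of n_partitions onto distinct nodes;
# feasible only for tiny instances, but it shows what is being optimized.
def best_allocation(n_partitions, node_survival):
    best = max(permutations(range(len(node_survival)), n_partitions),
               key=lambda a: system_reliability(a, node_survival))
    return best, system_reliability(best, node_survival)

# Per-node survival probabilities (combined failure + intrusion), illustrative.
node_survival = [0.99, 0.95, 0.90, 0.97]
alloc, rel = best_allocation(2, node_survival)  # picks the two safest nodes
```

A realistic version would replace the series-survival product with the phased-mission models from the paper and the brute-force search with a proper optimization algorithm.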
Identifying reliability high-correlated gates (HRCGs) is vital for fault location and exclusion, especially for cascading faults. By executing a linear fit on the results of the circuit's reliability evaluation and calibrating the fit function using regression residual analysis, this brief first proves the existence of HRCGs. A time-series-oriented Pearson correlation coefficient (PCC) model is then introduced to quantify the gates' reliability correlation (GRC) and identify all the HRCGs in the circuit. Circuit-correlated primary outputs and sequential-circuit-correlated flip-flops are further identified based on this approach. Experimental results on benchmark circuits show that the average accuracy of the approach is 0.9972 relative to the Monte Carlo (MC) method, while it is 2591 times faster than the MC method. On larger circuits, the identification rate and stability are 6.07 times and 13.55 times greater than those of the reference method and the rand method, respectively.
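The correlation measure at the core of the approach is the standard Pearson correlation coefficient applied to per-gate reliability time series. The sketch below computes that coefficient for two toy series; the thresholding and HRCG identification logic of the brief are not reproduced, and the series values are invented.

```python
import math

# Pearson correlation coefficient (PCC) of two equal-length series; applied
# here to gate reliability values sampled over simulation time steps.
def pcc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy reliability series of two gates; both degrade in lockstep, so their
# reliability correlation is (numerically) 1.0.
gate_a = [0.99, 0.97, 0.95, 0.93]
gate_b = [0.98, 0.96, 0.94, 0.92]
```

Gates whose pairwise PCC exceeds some calibrated threshold would then be flagged as reliability high-correlated, which is the identification step the brief builds on top of this statistic.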
As the cardinality of multiprocessor systems grows, the probability of malfunctioning or failing processors arising in the system is bound to increase. It is then of both practical and theoretical importance to know the reliability of the system as a whole. One metric for a system's overall reliability is the measurement of the collective effect of its subsystems becoming faulty. A challenge of this approach, however, is that the subsystems often interact with each other in a complex manner, making the analysis difficult. Wu and Latifi (Inf. Sci., vol. 178, pp. 2337-2348, Oct. 2008) proposed two schemes to evaluate the system reliability of the Star graph network under a probabilistic fault model. The first scheme computes the combinatorial probability of subgraphs to obtain an upper bound on the reliability by considering the intersection of no more than three subgraphs. The second scheme computes an approximate combinatorial probability by completely neglecting the intersection among subgraphs. Recently, Lin et al. applied this approach to investigate the reliability of the multiprocessor system based on the arrangement graph (IEEE Trans. Rel., vol. 62, no. 2, pp. 807-818, Jun. 2015). In this paper, we extend the above approach by computing both upper and lower bounds and considering the difference between the two, to establish the reliability of the (n, k)-Star graph, another extensively studied interconnection network for multiprocessor systems. More specifically, we compute a lower bound and an upper bound on the reliability by taking into account the intersection of no more than four or three subgraphs, respectively. The empirical study shows that the upper and lower bounds are both very close to the approximate results. In particular, the lower the single-node reliability goes, the closer the approximate reliability is to both the lower and upper bounds.
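The bounding technique here is Bonferroni-style truncated inclusion-exclusion: cutting the expansion after an odd number of intersection terms gives an upper bound on the union probability, after an even number a lower bound. The sketch below demonstrates this on toy event sets over a uniform sample space; it does not reconstruct the subgraph survival events of the (n, k)-Star graph.

```python
from itertools import combinations

# Toy sample space: outcomes 0..9, equally likely.
def probability(outcomes):
    return len(outcomes) / 10.0

def truncated_union_probability(event_sets, depth):
    """Inclusion-exclusion for P(union), truncated after intersections of
    `depth` events. Odd depth => upper bound, even depth => lower bound."""
    total = 0.0
    for r in range(1, depth + 1):
        sign = (-1) ** (r + 1)
        for combo in combinations(event_sets, r):
            total += sign * probability(set.intersection(*combo))
    return total

# Three overlapping events whose union covers the whole space (P = 1.0).
events = [set(range(0, 5)), set(range(3, 8)), set(range(6, 10))]
upper = truncated_union_probability(events, depth=1)  # singles only: 1.4
lower = truncated_union_probability(events, depth=2)  # minus pairwise: 1.0
exact = truncated_union_probability(events, depth=3)  # triple term is empty here
```

The paper's scheme does the same thing with subgraph-failure events, stopping at depth three for the upper bound and depth four for the lower bound, and uses the gap between the two to judge the approximation.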