Objective: The present study proposes a new epileptic seizure prediction method that integrates heart rate variability (HRV) analysis with an anomaly monitoring technique. Methods: Because excessive neuronal activity in the preictal period of epilepsy affects the autonomic nervous system, and autonomic nervous function in turn affects HRV, it is assumed that a seizure can be predicted by monitoring HRV. In the proposed method, eight HRV features are monitored for predicting seizures using multivariate statistical process control, a well-known anomaly monitoring method. Results: We applied the proposed method to clinical data collected from 14 patients. In the collected data, 8 patients had a total of 11 awakening preictal episodes, and the total length of the interictal episodes was about 57 h. The results demonstrated that seizures in 10 of the 11 awakening preictal episodes could be predicted prior to seizure onset, that is, the sensitivity was 91%, with a false positive rate of about 0.7 times per hour. Conclusion: This study proposed a new HRV-based epileptic seizure prediction method and showed the feasibility of realizing an HRV-based epileptic seizure prediction system. Significance: The proposed method can be used in daily life, because heart rate can be measured easily with a wearable sensor.
Objective: Driver drowsiness detection is a key technology that can prevent fatal car accidents caused by drowsy driving. The present work proposes a driver drowsiness detection algorithm based on heart rate variability (HRV) analysis and validates the proposed method by comparison with electroencephalography (EEG)-based sleep scoring. Methods: Changes in sleep condition affect the autonomic nervous system and hence HRV, which is defined as the fluctuation of RR intervals (RRIs) on an electrocardiogram trace. Eight HRV features are monitored for detecting changes in HRV using multivariate statistical process control, a well-known anomaly detection method. Results: The performance of the proposed algorithm was evaluated through an experiment using a driving simulator. In this experiment, RRI data were measured from 34 participants during driving, and their sleep onsets were determined from the EEG data by a sleep specialist. Validation against the EEG data showed that drowsiness was detected in 12 out of 13 pre-N1 episodes prior to sleep onset, with a false positive rate of 1.7 times per hour. Conclusion: The present work also demonstrates the usefulness of the framework of HRV-based anomaly detection that was originally proposed for epileptic seizure prediction. Significance: The proposed method can contribute to preventing accidents caused by drowsy driving.
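Both abstracts above monitor eight HRV features with multivariate statistical process control. As a minimal sketch of that framework, the example below applies a Hotelling T^2 statistic, a standard MSPC tool assumed here for illustration (the papers' exact statistic may differ), to synthetic data with two features for brevity:

```python
# Sketch of multivariate statistical process control (MSPC): a baseline period
# defines the in-control state, and a Hotelling T^2 statistic flags departures.
# Feature values, the shift size, and the threshold are illustrative assumptions.
import random

def fit_baseline(X):
    """In-control mean vector and inverse covariance for two features."""
    n = len(X)
    m = [sum(row[j] for row in X) / n for j in range(2)]
    c = [[sum((row[i] - m[i]) * (row[j] - m[j]) for row in X) / (n - 1)
          for j in range(2)] for i in range(2)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    return m, inv

def t2_statistic(x, m, inv):
    """Hotelling T^2 distance of one observation from the in-control state."""
    d = [x[0] - m[0], x[1] - m[1]]
    return (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))

random.seed(0)
baseline = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
m, inv = fit_baseline(baseline)

normal_obs = [random.gauss(0, 1), random.gauss(0, 1)]
shifted_obs = [normal_obs[0] + 5, normal_obs[1] + 5]   # large anomalous shift

threshold = 13.8   # roughly the chi-square (2 df) 0.999 quantile
print(round(t2_statistic(normal_obs, m, inv), 2),
      round(t2_statistic(shifted_obs, m, inv), 2))
```

In the papers' setting the baseline would be interictal (or alert-driving) HRV data, and an alarm would be raised when T^2 stays above the threshold.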
•We developed a model for the joint optimization of process control and maintenance.
•Information from the control charts is used to facilitate maintenance decisions.
•Process control and maintenance procedures are inter-dependent.
•Potential cost savings can be obtained from joint SPC and maintenance policies.
Statistical process control and maintenance planning have long been treated as two separate problems. The interdependence between these two activities has not been adequately addressed in the literature, despite their apparent connections. Information obtained in the course of statistical process control signals the need for possible maintenance actions and thus affects preventive maintenance schedules. Preventive maintenance actions can prevent a production process from further deterioration and, in conjunction with statistical process control, improve product quality. This paper presents an integrated model for the joint optimization of statistical process control and preventive maintenance. The proposed model is developed for a production process that deteriorates according to a discrete-time Markov chain. It is assumed that preventive maintenance is imperfect and that both preventive and corrective maintenance are instantaneous. Formulating the deterioration process with maintenance interventions as a Markov chain provides a breakthrough in designing an efficient solution algorithm and obtaining analytical results. A numerical example is used to illustrate the proposed integrated statistical process control and preventive maintenance policies. Sensitivity analysis is conducted to analyze the impact of model parameters on the optimal policies; it further indicates the interrelationship between statistical process control and maintenance actions. Numerical results indicate that potential cost savings can be achieved from the proposed integrated policies.
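The deterioration model above can be made concrete with a small simulation. The transition probabilities, the preventive maintenance (PM) schedule, and the PM success probability below are illustrative assumptions, not the paper's parameters:

```python
# Minimal sketch of a discrete-time Markov-chain deterioration process with
# imperfect preventive maintenance (PM) and instantaneous corrective maintenance.
# All probabilities and the PM interval are illustrative assumptions.
import random

P = {0: [0.90, 0.08, 0.02],   # row i: transition probabilities from state i
     1: [0.00, 0.85, 0.15],
     2: [0.00, 0.00, 1.00]}   # state 2 = failed

def step(state):
    return random.choices([0, 1, 2], weights=P[state])[0]

def preventive_maintenance(state, p_success=0.7):
    """Imperfect PM: restores the process to 'as new' only with prob p_success."""
    return 0 if random.random() < p_success else state

random.seed(42)
state, failures = 0, 0
for t in range(10_000):
    state = step(state)
    if state == 2:            # corrective maintenance: instantaneous, perfect
        failures += 1
        state = 0
    elif t % 20 == 0:         # periodic PM epoch (every 20 periods)
        state = preventive_maintenance(state)
print("failures per 10k periods:", failures)
```

A joint optimization, as in the paper, would additionally attach sampling, maintenance, and failure costs to these events and search over chart and PM parameters.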
In this work, an extensive bibliometric review of the scientific production on statistical process control applied to the manufacturing industry was carried out, mapping the main research in the literature as well as the main journals publishing this research. Statistical process control is one of the main tools that allow production managers to determine whether processes meet customer-specified requirements, providing better product and process quality. The analysis illustrates the evolution of research over recent decades, the main journals for publication, the level of concentration or fragmentation of the scientific community, and the geographic density of research collaborations. Finally, the main topics addressed by the scientific community debating SPC in manufacturing applications are presented, along with future research directions for this theme.
Recent advancements in data-driven process control and performance analysis could provide the wastewater treatment industry with an opportunity to reduce costs and improve operations. However, big data in wastewater treatment plants (WWTP) is widely underutilized, due in part to a workforce that lacks the background in data science required to fully analyze the unique characteristics of WWTP. Wastewater treatment processes exhibit nonlinear, nonstationary, autocorrelated, and co-correlated behavior that (i) is very difficult to model from first principles and (ii) must be considered when implementing data-driven methods. This review provides an overview of data-driven methods for achieving fault detection, variable prediction, and advanced control of WWTP. We present how big data has been used in the context of WWTP, and much of the discussion can also be applied to water treatment. Due to the assumptions inherent in different data-driven modeling approaches (e.g., control charts, statistical process control, model predictive control, neural networks, transfer functions, fuzzy logic), not all methods are appropriate for every goal or every dataset. Practical guidance is given for matching a desired goal with a particular methodology, along with considerations regarding the assumed data structure. References for further reading are provided, and an overall analysis framework is presented.
•Wastewater treatment produces nonstationary, autocorrelated, & co-correlated data.
•A fundamental understanding of statistical process control is needed for facilities.
•Method modifications are needed to account for the unique features of wastewater.
•Neural networks can be limited by the quality of data produced by facilities.
•Statistical process control is also not a silver bullet for wastewater treatment.
Prediction methods can be augmented by local explanation methods (LEMs) to perform root cause analysis for individual observations. But while most recent research on LEMs focuses on low-dimensional problems, real-world datasets commonly have hundreds or thousands of variables. Here, we investigate how LEMs perform for high-dimensional industrial applications. Seven prediction methods (penalized logistic regression, LASSO, gradient boosting, random forest, and support vector machines) and three LEMs (TreeExplainer, Kernel SHAP, and conditional normal sampling importance (CNSI)) were combined into twelve explanation approaches. These approaches were used to compute explanations for simulated data and for real-world industrial data with simulated responses. The approaches were ranked by how well they predicted the contributions according to the true models. In the simulation experiment, the generalized linear methods provided the best explanations, while gradient boosting with either TreeExplainer or CNSI, or random forest with CNSI, were robust across all relationship types. In the real-world experiment, TreeExplainer performed similarly, while the explanations from CNSI were significantly worse. The generalized linear models were fastest, followed by TreeExplainer, while CNSI and Kernel SHAP required several orders of magnitude more computation time. In conclusion, local explanations can be computed for high-dimensional data, but the choice of statistical tools is crucial.
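To make the notion of a local explanation concrete: for a linear model with independent features, the Shapley value of feature j has the closed form beta_j * (x_j - mean_j), which brute-force coalition enumeration reproduces. The coefficients and observation below are illustrative, not from the paper:

```python
# Exact Shapley values for a small linear model, computed by enumerating all
# feature coalitions; this is the quantity that Kernel SHAP and TreeExplainer
# approximate for larger models. Model and data are illustrative assumptions.
from itertools import combinations
from math import factorial

beta = [2.0, -1.0, 0.5]   # linear model coefficients
mean = [1.0, 0.0, 3.0]    # background (expected) feature values
x = [2.0, 4.0, 1.0]       # the observation being explained

def value(S):
    """Model output with features in S set to x and the rest at their mean."""
    return sum(beta[j] * (x[j] if j in S else mean[j]) for j in range(3))

def shapley(j, n=3):
    """Shapley value of feature j via weighted coalition enumeration."""
    others = [k for k in range(n) if k != j]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (value(set(S) | {j}) - value(set(S)))
    return total

contrib = [shapley(j) for j in range(3)]
closed_form = [beta[j] * (x[j] - mean[j]) for j in range(3)]
print(contrib, closed_form)
```

Enumeration costs 2^d coalitions, which is exactly why the sampling-based approximations benchmarked above matter in high-dimensional settings.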
•Simulated high-dimensional process-like datasets with binary quality variable.
•Real-world process data with a simulated response.
•Evaluated twelve prediction and explanation approaches using two metrics.
•Generalized linear models perform well for monotone relationships.
•Tree-based models are robust for multiple types of relationships.
Process monitoring of multivariate quality attributes is important in many industrial applications, in which rich historical data are often available thanks to modern sensing technologies. While multivariate statistical process control (SPC) has been receiving increasing attention, existing methods are often inadequate as they are sensitive to the parametric model assumptions of multivariate data. In this paper, we propose a novel, nonparametric k-nearest neighbours empirical cumulative sum (KNN-ECUSUM) control chart that is a machine-learning-based black-box control chart for monitoring multivariate data by utilising extensive historical data under both in-control and out-of-control scenarios. Our proposed method utilises the k-nearest neighbours (KNN) algorithm for dimension reduction to transform multivariate data into univariate data and then applies the CUSUM procedure to monitor the change on the empirical distribution of the transformed univariate data. Extensive simulation studies and a real industrial example based on a disk monitoring system demonstrate the robustness and effectiveness of our proposed method.
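A rough sketch of the KNN-ECUSUM idea follows: each multivariate observation is reduced to a univariate score (here, its mean distance to its k nearest in-control neighbours, a simple choice assumed for illustration), and a one-sided CUSUM is run on the scores. The parameters are illustrative rather than the paper's tuned values:

```python
# KNN score + CUSUM sketch: multivariate observations are mapped to univariate
# anomaly scores via distances to in-control history, then monitored by CUSUM.
# Reference size, k, slack, and the control limit are illustrative assumptions.
import math
import random

def knn_score(x, reference, k=5):
    """Mean distance from x to its k nearest in-control neighbours."""
    d = sorted(math.dist(x, r) for r in reference)
    return sum(d[:k]) / k

def cusum(scores, target, slack, limit):
    """One-sided CUSUM on the scores; returns index of the first alarm, or None."""
    s = 0.0
    for i, z in enumerate(scores):
        s = max(0.0, s + z - target - slack)
        if s > limit:
            return i
    return None

def point(mu):
    return [random.gauss(mu, 1) for _ in range(4)]

random.seed(1)
ref = [point(0) for _ in range(200)]                 # in-control history
ic = [knn_score(point(0), ref) for _ in range(50)]   # in-control stream
oc = [knn_score(point(2), ref) for _ in range(50)]   # mean-shifted stream

target = sum(ic) / len(ic)                           # in-control score level
alarm = cusum(ic + oc, target, slack=0.2, limit=3.0)
print("first alarm at index:", alarm)
```

The paper's method additionally uses out-of-control historical data when building the transformation and monitors the empirical distribution of the scores rather than their raw level.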
The CUSUM control chart is suitable for detecting small to moderate parameter shifts in processes involving autocorrelated data. The average run length (ARL) can be used to assess the ability of a CUSUM control chart to detect changes in a long-memory seasonal autoregressive fractionally integrated moving average with exogenous variable (SARFIMAX) process with underlying exponential white noise. Herein, new ARLs obtained via an analytical integral equation (IE) solution and a numerical IE method, for testing a CUSUM control chart's ability to detect a wide range of shifts in the mean of a SARFIMAX(P, D, Q, r)_s process with underlying exponential white noise, are presented. The analytical IE formulas were derived by using the Fredholm integral equation of the second kind, while the numerical IE method for the approximate ARL is based on quadrature rules. After applying Banach's fixed-point theorem to guarantee its existence and uniqueness, the precision of the proposed analytical IE ARL was the same as that of the numerical IE method. The sensitivity and accuracy of the ARLs based on both methods were assessed on a CUSUM control chart running a SARFIMAX(P, D, Q, r)_s process with underlying exponential white noise. The results of an extensive numerical study, comprising a wide variety of out-of-control situations and computational schemes, reveal that none of the methods outperformed the analytical IE. Specifically, its computational scheme is easier and can be completed in one step; hence, it is recommended for this situation. An illustrative example based on real data is also provided, the results of which were found to be in accordance with the research results.
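The ARL in question can be made concrete by Monte Carlo simulation; the paper's contribution is computing it analytically via integral equations instead. The sketch below estimates the ARL of a one-sided CUSUM on plain exponential noise (not the full SARFIMAX process), with illustrative chart parameters:

```python
# Monte Carlo estimate of the average run length (ARL): the mean number of
# observations a one-sided CUSUM takes to signal, for exponential observations.
# Reference value, control limit, and replication count are illustrative.
import random

def run_length(mean, reference=1.5, limit=4.0, max_n=10_000):
    """Observations until a one-sided CUSUM of Exp(mean) data signals."""
    s = 0.0
    for n in range(1, max_n + 1):
        x = random.expovariate(1.0 / mean)   # exponential with the given mean
        s = max(0.0, s + x - reference)
        if s > limit:
            return n
    return max_n

random.seed(7)
reps = 2000
arl_in = sum(run_length(mean=1.0) for _ in range(reps)) / reps   # in-control
arl_out = sum(run_length(mean=2.0) for _ in range(reps)) / reps  # shifted mean
print(arl_in, arl_out)
```

A long in-control ARL together with a short out-of-control ARL is exactly the trade-off the paper's analytical and numerical IE solutions evaluate, without the sampling noise of simulation.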
Monitoring wind turbine blade breakages based on supervisory control and data acquisition (SCADA) data is investigated in this research. A preliminary data analysis is performed to demonstrate that existing SCADA features are unable to reveal irregular patterns prior to occurrences of blade breakages. A deep autoencoder (DA) model is introduced to derive an indicator of impending blade breakages, the reconstruction error (RE), from SCADA data. The DA model is a neural network with multiple hidden layers organized symmetrically. In training DA models, the restricted Boltzmann machine is applied to initialize weights and biases, and the back-propagation method is subsequently employed to further optimize the network structure. Through examining SCADA data, we observe that the trend of the RE shifts before a blade breakage. To detect RE shifts effectively through online monitoring, the exponentially weighted moving average (EWMA) control chart is deployed. The effectiveness of the proposed monitoring approach is validated on blade breakage cases collected from wind farms located in China. The computational results demonstrate the capability of the proposed approach to identify impending blade breakages.
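The final monitoring step described above, an EWMA chart on the reconstruction-error series, can be sketched as follows. The RE values here are synthetic stand-ins for autoencoder output, and the smoothing constant and control limit width are illustrative assumptions:

```python
# EWMA control chart on a reconstruction-error (RE) series: the EWMA statistic
# smooths the series, and values above the upper control limit raise alarms.
# The RE data, lambda, and limit width L are illustrative assumptions.
import random

def ewma_alarms(series, mu0, sigma0, lam=0.2, L=3.0):
    """Return indices where the EWMA statistic exceeds its upper control limit."""
    # steady-state upper control limit of the EWMA statistic
    ucl = mu0 + L * sigma0 * (lam / (2 - lam)) ** 0.5
    z, alarms = mu0, []
    for i, x in enumerate(series):
        z = lam * x + (1 - lam) * z
        if z > ucl:
            alarms.append(i)
    return alarms

random.seed(3)
healthy = [random.gauss(1.0, 0.1) for _ in range(200)]   # stable RE level
faulty = [random.gauss(1.6, 0.1) for _ in range(50)]     # RE shift before breakage
alarms = ewma_alarms(healthy + faulty, mu0=1.0, sigma0=0.1)
print("first alarm:", alarms[0] if alarms else None)
```

In the paper's pipeline, `mu0` and `sigma0` would be estimated from the RE of healthy turbine operation, and sustained alarms would indicate an impending blade breakage.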