Operators often make different control decisions for different operating modes to meet the production requirements of the iron ore sintering process. Recognizing the operating mode is therefore important for improving the quality and quantity of the sinter ore. This paper presents an operating mode recognition method based on the clustering of time-series data for the iron ore sintering process. First, Spearman rank correlation analysis and information entropy analysis are combined to select parameters. Next, the operating mode recognition submodels are built using the fuzzy C-Means clustering method based on the dynamic time warping distance and the naive Bayesian classifier method. Then, the outputs of the submodels are fused to obtain the final recognized operating mode. Finally, productivity and combustion efficiency are taken as the classification criteria, and raw data collected from an iron and steel plant are used for the experiment. The experimental results show that the proposed method can effectively recognize the operating mode of the sintering process.
•An operating mode recognition method for the sintering process is presented.
•Fuzzy C-Means clustering based on the dynamic time warping distance is used.
•The naive Bayesian classification method is used to identify the operating mode.
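As a minimal sketch of the dissimilarity measure underlying the clustering step above, the dynamic time warping (DTW) distance between two series can be computed by dynamic programming. This is the textbook form only; the paper's full fuzzy C-Means pipeline is not reproduced here.

```python
# Classic O(len(a)*len(b)) dynamic-programming DTW with |x - y| point cost.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Two series with the same shape but a small time shift are close under DTW,
# even though a pointwise (Euclidean) comparison would penalize the lag.
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # -> 0.0
```

This alignment flexibility is why DTW suits the time-series clustering of process data better than a fixed-index distance.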
A general assumption in group decision making scenarios is that all individuals possess accurate knowledge of the entire problem under study, including the ability to distinguish the degree to which one alternative is better than another. However, in many real-world scenarios this may be unrealistic, particularly in those involving numerous individuals and options, drawing on conflicting and dynamic information sources. To manage such situations, estimation methods for incomplete information, which use the assessments provided by the individuals together with consistency criteria to avoid discrepancy, have been widely employed for fuzzy preference relations. In this study, we introduce the concept of information granularity to estimate missing values, with the objective of obtaining complete fuzzy preference relations with higher consistency levels. We use granular preference relations to represent each missing value as a granule of information in place of a crisp number. This offers the flexibility required to estimate the missing information so that the consistency levels of the resulting complete fuzzy preference relations are as high as possible.
•A granular procedure for estimating missing information in fuzzy preference relations is proposed.
•The missing values of a fuzzy preference relation are assumed to be granular instead of numeric.
•Information granularity is used to estimate the missing information.
•The consistency levels of the complete fuzzy preference relations are kept as high as possible.
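A small sketch of the classic crisp estimation rule that granular approaches generalize: under additive consistency, p_ik = p_ij + p_jk - 0.5, so a missing entry can be estimated through the known intermediate alternatives. The interval step at the end, which widens the crisp estimate into a granule of a chosen radius, is an illustrative assumption, not the paper's exact optimization of the granularity level.

```python
# Estimate one missing entry of a fuzzy preference relation (values in [0, 1],
# None = missing) by averaging additive-consistency estimates over all
# intermediate alternatives j: p_ik = p_ij + p_jk - 0.5.
def estimate_missing(P, i, k):
    n = len(P)
    estimates = [P[i][j] + P[j][k] - 0.5
                 for j in range(n)
                 if j not in (i, k) and P[i][j] is not None and P[j][k] is not None]
    value = sum(estimates) / len(estimates)
    return min(1.0, max(0.0, value))  # clip into [0, 1]

# 3-alternative example: P[0][2] is unknown; estimate it through j = 1.
P = [[0.5, 0.7, None],
     [0.3, 0.5, 0.8],
     [None, 0.2, 0.5]]
est = estimate_missing(P, 0, 2)
print(est)  # -> 1.0 (0.7 + 0.8 - 0.5, clipped into [0, 1])

# Granular version (illustrative): admit an interval of some radius (the
# level of information granularity) around the crisp estimate instead of a
# single number, leaving room to maximize overall consistency.
radius = 0.1
granule = (max(0.0, est - radius), min(1.0, est + radius))
print(granule)
```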
Sinter ore is the main raw material for ironmaking, and the burn-through point (BTP) is one of the significant factors for measuring the stability of the sintering process. In this article, a fuzzy control strategy for the BTP is presented based on feature extraction from the time-series trend. First, the Hurst exponent of the BTP time series is calculated using the rescaled range analysis method, from which the trend feature is analyzed. Then, using the Mann-Kendall test, both the global and local trend feature variables of the BTP time series are extracted and taken as the inputs of the fuzzy controller. Next, a fuzzy controller for the BTP is designed to produce the control quantity of the strand velocity. Finally, based on a semiphysical simulation system and raw data collected from an iron and steel plant, an experiment is carried out to demonstrate the effectiveness of the proposed control strategy.
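The two trend tools named above can be sketched in their textbook forms: the rescaled range (R/S) estimate of the Hurst exponent and the Mann-Kendall S statistic. Window sizes and normalizations below are illustrative choices; the paper's fuzzy controller itself is not reproduced.

```python
import math

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations / std dev."""
    n = len(x)
    mean = sum(x) / n
    dev, cum = 0.0, []
    for v in x:
        dev += v - mean
        cum.append(dev)
    r = max(cum) - min(cum)
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return r / s

def hurst(x, window_sizes=(8, 16, 32, 64)):
    """Least-squares slope of log(mean R/S) versus log(window size)."""
    pts = []
    for w in window_sizes:
        rs = [rescaled_range(x[i:i + w]) for i in range(0, len(x) - w + 1, w)]
        pts.append((math.log(w), math.log(sum(rs) / len(rs))))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, py in pts)
    return num / den

def mann_kendall_s(x):
    """Mann-Kendall S: positive for an upward trend, negative for a downward one."""
    n = len(x)
    return sum((x[j] > x[i]) - (x[j] < x[i])
               for i in range(n) for j in range(i + 1, n))

trend = [0.1 * t for t in range(100)]  # strongly trending toy series
print(mann_kendall_s(trend) > 0)       # upward trend -> True
print(hurst(trend) > 0.5)              # persistent (trending) series -> True
```

A Hurst exponent above 0.5 indicates persistence, and the sign and magnitude of S capture the trend direction, which is what makes these features natural inputs for a trend-aware controller.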
The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed significantly to the development of this field, they are either limited in problem size or require manual intervention in choosing parameters. To overcome these difficulties, many studies have considered learning-based optimization (LBO) algorithms for solving the VRP. This paper reviews recent advances in this field and divides the relevant approaches into end-to-end approaches and step-by-step approaches. We performed a statistical analysis of the reviewed articles from various aspects and designed three experiments to evaluate the performance of four representative LBO algorithms. Finally, we summarize the types of problems to which different LBO algorithms are applicable and suggest directions in which researchers can improve LBO algorithms.
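To ground the "step-by-step" category, here is a hand-crafted constructive baseline for the capacitated VRP: repeatedly extend the current route with the nearest feasible customer, opening a new route when capacity runs out. Step-by-step LBO methods replace exactly this kind of fixed selection rule with a learned policy. The problem data are made up for illustration.

```python
import math

def greedy_cvrp(depot, customers, demands, capacity):
    """Nearest-neighbor construction; returns a list of routes (customer indices)."""
    unvisited = set(range(len(customers)))
    routes = []
    while unvisited:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unvisited if load + demands[c] <= capacity]
            if not feasible:
                break  # vehicle is full (or nothing left): return to depot
            nxt = min(feasible, key=lambda c: math.dist(pos, customers[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = customers[nxt]
            unvisited.discard(nxt)
        routes.append(route)
    return routes

depot = (0.0, 0.0)
customers = [(1, 0), (2, 0), (0, 3), (0, 4)]
demands = [1, 1, 1, 1]
print(greedy_cvrp(depot, customers, demands, capacity=2))  # -> [[0, 1], [2, 3]]
```

A learned step-by-step policy scores the `feasible` set with a neural network instead of the distance-to-`pos` rule, which is where the reviewed algorithms differ from this baseline.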
With the development of smart devices, the Internet of Things (IoT) has found wide application and extended various Internet services. On intermediate and edge nodes of the IoT network, the aggregation primitive is a basic function for forwarding data, which takes the data of source nodes as input. To protect the sensitive information of source nodes while enabling aggregation computations, several works have presented corresponding secure protocols. However, the aggregation node could return invalid results due to transmission failures, software bugs, or computation delays. Thus, how to verify the results is a major challenge for secure aggregation protocols. In this paper, we focus on this verification problem and propose a new publicly verifiable scheme for the aggregation operation. Unlike existing solutions, our scheme enables a public verifier to test an aggregation result over the data of source nodes while protecting data privacy. Security analysis shows that the proposed scheme achieves the desired security properties. Finally, we provide an experimental evaluation that demonstrates the effectiveness of our scheme.
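For intuition about why aggregation can preserve privacy at all, here is a minimal sketch of additive masking: each pair of nodes agrees on a random mask that one adds and the other subtracts, so individual inputs stay hidden while the modular sum is exact. This is a standard building block of secure aggregation, not the paper's publicly verifiable scheme; all values and the modulus are illustrative.

```python
import random

MOD = 2 ** 31 - 1  # arithmetic is done modulo a public prime

def mask_inputs(values, rng):
    """Return masked values whose sum (mod MOD) equals the sum of the inputs."""
    n = len(values)
    masked = list(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)
            masked[i] = (masked[i] + m) % MOD  # node i adds the pairwise mask
            masked[j] = (masked[j] - m) % MOD  # node j subtracts the same mask
    return masked

values = [12, 7, 30, 1]
masked = mask_inputs(values, random.Random(0))
aggregate = sum(masked) % MOD
# The pairwise masks cancel, so the aggregator learns only the sum.
print(aggregate, aggregate == sum(values) % MOD)  # -> 50 True
```

Verifiability, the paper's focus, is the separate problem of letting anyone check that the node actually returned this sum rather than an invalid result.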
Modern vehicles are equipped with various driver-assistance systems, including automatic lane keeping, which prevents unintended lane departures. Traditional lane detection methods rely on handcrafted or deep learning-based features followed by postprocessing techniques for lane extraction, using frame-based RGB cameras. Frame-based RGB cameras are prone to illumination variations, sun glare, and motion blur, which limits the performance of lane detection methods. Incorporating an event camera into the perception stack of autonomous driving is one of the most promising solutions for mitigating the challenges encountered by frame-based RGB cameras. The main contribution of this work is the design of a lane marking detection model that employs a dynamic vision sensor. This paper explores the novel application of lane marking detection with an event camera by designing a convolutional encoder followed by an attention-guided decoder. The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling (ASPP) block. The additive attention mechanism in the decoder improves performance for high-dimensional input encoded features, promoting lane localization and relieving postprocessing computation. The efficacy of the proposed work is evaluated on the DVS dataset for lane extraction (DET). The experimental results show significant improvements of 5.54% and 5.03% in F1 scores for the multiclass and binary-class lane marking detection tasks, respectively. Additionally, the intersection over union (IoU) scores of the proposed method surpass those of the best-performing state-of-the-art method by 6.50% and 9.37% in the multiclass and binary-class tasks, respectively.
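The operation ASPP stacks at several dilation rates can be illustrated in one dimension: an atrous (dilated) convolution spaces its kernel taps `rate` samples apart, enlarging the receptive field without downsampling the feature map. Toy numbers only; the paper's model is a full 2-D encoder-decoder network.

```python
def atrous_conv1d(x, kernel, rate):
    """'Valid' dilated convolution: output[i] = sum_k kernel[k] * x[i + k*rate]."""
    span = (len(kernel) - 1) * rate  # receptive field minus one
    return [sum(kernel[k] * x[i + k * rate] for k in range(len(kernel)))
            for i in range(len(x) - span)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
k = [1, 0, -1]  # a simple difference (edge-detecting) kernel

# Same 3-tap kernel, growing receptive field as the dilation rate increases:
print(atrous_conv1d(x, k, rate=1))  # taps 1 apart -> [-2, -2, -2, -2, -2, -2]
print(atrous_conv1d(x, k, rate=2))  # taps 2 apart -> [-4, -4, -4, -4]
```

An ASPP block runs several such convolutions at different rates in parallel and concatenates the results, capturing context at multiple scales while keeping spatial resolution.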
Task prioritization is one of the most researched areas in software development. Given the huge number of papers written on the topic, it might be challenging for IT practitioners (software developers and IT project managers) to find the most appropriate tools or methods developed to date for dealing with this important issue. The main goal of this work is therefore to review the current state of research and practice on task prioritization in the software engineering domain and to identify the most effective ranking tools and techniques used in industry. For this purpose, we conducted a systematic literature review guided and inspired by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, otherwise known as the PRISMA statement. Based on our analysis, we can make a number of important observations for the field. Firstly, we found that most of the task prioritization approaches developed to date involve one specific prioritization strategy, namely bug prioritization. Secondly, the most recent works we reviewed investigate task prioritization in terms of "pull request prioritization" and "issue prioritization" (and we speculate that the number of such works will significantly increase due to the explosion of version control and issue management software systems). Thirdly, we note that the most frequently used metrics for measuring the quality of a prioritization model are F-score, precision, recall, and accuracy.
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. Learning a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning of a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. The original features are then mapped to kernel space, and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of different parameter settings in the proposed algorithms and evaluate model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods in cross-domain and multiclass object recognition applications.
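The passive-aggressive principle the first algorithm builds on can be sketched in its classic binary-classification form: stay passive when the current model already satisfies the margin, otherwise make the minimal (aggressive) correction that does. The paper applies the same idea to a transformation matrix / similarity metric rather than a linear classifier.

```python
def pa_update(w, x, y):
    """One classic PA step: w <- w + tau * y * x, tau = hinge loss / ||x||^2."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)           # hinge loss on example (x, y)
    if loss == 0.0:
        return w                            # passive: margin already satisfied
    tau = loss / sum(xi * xi for xi in x)   # aggressive: minimal correction
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
stream = [([1.0, 0.0], +1), ([0.0, 1.0], -1), ([1.0, 0.0], +1)]
for x, y in stream:
    w = pa_update(w, x, y)
print(w)  # -> [1.0, -1.0]; the third example needs no update
```

Each update is the closest point to the old model that attains unit margin on the current example, which is what makes the family attractive for online, one-pass learning.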
We develop a comprehensive and original methodology of data compression realized in the setting of granular computing. It is advocated that a compression process is inherently associated with the emergence of information granules forming the compressed data. This entails that compression goes hand in hand with an elevated level of abstraction of the generated results. The performance of the method is evaluated with the aid of the coverage and specificity indexes commonly encountered when processing and describing information granules. A two-phase design environment is systematically established, along with a detailed algorithmic layer exploring mechanisms of fuzzy clustering and the principle of justifiable granularity and its generalizations. Reconstruction error and granular reconstruction error criteria are introduced and analyzed. Experimental studies carried out on publicly available data are reported; they illustrate the process of granular compression and analyze the performance of the obtained results.
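A one-dimensional sketch of the principle of justifiable granularity: grow an interval around the data's median so that the product of coverage (how much data falls inside) and specificity (how tight the interval is) is maximized. The particular coverage and specificity forms below are common textbook choices, not necessarily the ones used in this work.

```python
def justifiable_interval(data):
    """Search interval bounds among the data points; maximize coverage * specificity."""
    data = sorted(data)
    med = data[len(data) // 2]
    full = data[-1] - data[0]               # range used to normalize specificity
    best, best_score = (med, med), -1.0
    for a in (d for d in data if d <= med):
        for b in (d for d in data if d >= med):
            coverage = sum(a <= d <= b for d in data) / len(data)
            specificity = 1.0 - (b - a) / full
            score = coverage * specificity
            if score > best_score:
                best, best_score = (a, b), score
    return best, best_score

data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 9]       # one outlier at 9
interval, score = justifiable_interval(data)
print(interval)  # -> (2, 4): the outlier is left out of the granule
```

The coverage/specificity trade-off is the same one the compression method uses to evaluate its information granules: wider granules cover more data but say less about it.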
The book shows how the various paradigms of computational intelligence, employed either singly or in combination, can produce an effective structure for obtaining often vital information from ECG signals. The text is self-contained, addressing concepts, methodology, algorithms, and case studies and applications, providing the reader with the necessary background augmented with step-by-step explanations of the more advanced concepts. It is structured in three parts: Part I covers the fundamental ideas of computational intelligence together with the relevant principles of data acquisition, morphology, and their use in diagnosis; Part II deals with techniques and models of computational intelligence that are suitable for signal processing; and Part III details ECG system-diagnostic interpretation and knowledge-acquisition architectures. Illustrative material includes brief numerical experiments, detailed schemes, exercises, and more advanced problems.