The purpose of this paper is to improve the accuracy of recovering missing IoT data. To this end, the authors have developed a new ensemble of neural network tools. It consists of two successive General Regression Neural Networks (GRNNs) and one neural-like structure of the Successive Geometric Transformation Model (SGTM). The construction of the ensemble topology from two successively connected GRNNs, supplemented with an SGTM neural-like structure, is mathematically substantiated, which improves the accuracy of the prediction results. The effectiveness of the method rests on replacing the plain summation of the outputs of the two GRNNs with a weighted summation, which improves the accuracy of the ensemble as a whole. A detailed algorithmic implementation of the ensemble method, together with a flowchart of its operation, is presented. The operating parameters of the ensemble are determined by brute-force optimization. Based on the developed ensemble method, the task of completing partially missing values in a real air-quality monitoring dataset collected by an IoT device is solved. A comparison with existing methods shows that the developed ensemble achieves the highest accuracy (in terms of Mean Absolute Percentage Error (MAPE) and Root Mean Squared Error (RMSE)) among the most similar methods of this class.
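The combination step described above can be sketched as follows. This is a minimal illustration of weighted summation plus brute-force weight search, not the authors' implementation; the weight grid, the component predictions, and the error metrics used for tuning are assumptions for the sketch.

```python
import math

def weighted_ensemble(pred_a, pred_b, w):
    """Weighted summation of two component predictors' outputs:
    w * pred_a + (1 - w) * pred_b instead of a plain sum/average."""
    return [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root Mean Squared Error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def brute_force_weight(y_true, pred_a, pred_b, step=0.01):
    """Exhaustive (brute-force) search over w in [0, 1] for the weight
    that minimizes RMSE of the weighted ensemble."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0 + 1e-9:
        err = rmse(y_true, weighted_ensemble(pred_a, pred_b, w))
        if err < best_err:
            best_w, best_err = w, err
        w += step
    return best_w, best_err
```

If one component predictor is much closer to the ground truth, the search drives the weight toward that component, which is the intuition behind replacing plain summation with weighted summation.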
During the operation of "smart" house systems, there is a need to process fuzzy input data. Models based on artificial neural networks are used to process fuzzy input data from the sensors. However, each type of artificial neural network has its own advantages and processes different types of data, and generates control signals, with different accuracy. To address this, a method for choosing the optimal type of artificial neural network is proposed. It is based on solving an optimization problem in which the optimization criterion is the error of a given type of artificial neural network when controlling the corresponding subsystem of a "smart" house. The same historical input data are used to train the different types of artificial neural networks. The study presents the dependencies between the type of neural network, the number of hidden layers, the number of neurons in each hidden layer, and the error of the calculated setting parameters relative to the expected results.
Digitization of the energy industry is the key to a successful energy transition. To this end, all consumers and generators should be able to communicate permanently with each other so that the energy system as a whole functions safely and efficiently. Smart meter technology can contribute to this. Unfortunately, the rollout selected in Germany initially affects only about 11% of all consumers. The objective of this paper is therefore to determine the current status of this technology in companies and to pursue the research question of which factors influence acceptance and use. For this purpose, an extensive literature search with more than 50 keywords was conducted in scientific databases. After reviewing and cleaning the literature, 47 papers were selected for the literature review and considered in detail. The literature review was conducted using eight evaluation criteria: origin and year of publication, identification of trends with Big Data and AI (artificial intelligence), type of organization, type of data, collection method, number of participants, type of data collection, and analysis method. In order to evaluate the main statements and results of the considered works, we also performed a Strengths–Weaknesses–Opportunities–Threats (SWOT) analysis.
Our analysis showed that: (1) The studies address only households as end-users; no companies are considered as end-users in relation to smart meter technology. (2) Technical aspects and barriers were often chosen as the research focus and content, and secondary data were mostly used. (3) Studies examining soft factors, such as acceptance criteria in general and for decision making, are rare and also focus purely on residential customers. (4) Of the studies that collected primary data as part of their research, 71% used a questionnaire survey. Further research should investigate companies' acceptance criteria, as this can increase implementation rates and enable better predictions about the technology.
The availability of radiometers and dosimeters for retail sale has made it possible to provide better radiation safety for citizens. The effects of radiation may not appear all at once; they can manifest themselves decades later and in future generations, in the form of cancer, genetic mutations, etc. For this reason, we have developed a microcontroller-based radiation monitoring system. The system determines the radiation dose accumulated over a given period and raises an alarm when the equivalent dose rate exceeds the permissible level. The high reliability of the system is ensured by its rapid response to emergency conditions: an excess of the permissible equivalent dose rate, and low battery charge, which is also monitored. Further, we have designed the microcontroller electronic circuit for the radiation monitoring system. Additionally, an operation algorithm, as well as software for the ATmega328P microcontroller of the Arduino Uno board, has been developed.
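The core monitoring logic (dose accumulation plus threshold alarms) can be sketched as follows. The thresholds, units, and class structure are hypothetical illustrations; the actual firmware targets an ATmega328P and is not reproduced here.

```python
# Hypothetical thresholds for the sketch (not from the paper):
DOSE_RATE_LIMIT_USV_H = 0.3   # assumed permissible equivalent dose rate, uSv/h
LOW_BATTERY_VOLTS = 3.3       # assumed low-charge warning threshold

class RadiationMonitor:
    """Minimal model of the monitoring loop: accumulate dose over time
    and flag two emergency conditions (dose-rate excess, low battery)."""

    def __init__(self):
        self.accumulated_usv = 0.0  # dose accumulated over the period

    def sample(self, dose_rate_usv_h, interval_h, battery_v):
        """Process one sampling interval; return which alarms fired."""
        self.accumulated_usv += dose_rate_usv_h * interval_h
        return {
            "rate_alarm": dose_rate_usv_h > DOSE_RATE_LIMIT_USV_H,
            "battery_alarm": battery_v < LOW_BATTERY_VOLTS,
        }
```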
Textual data is proliferating over time, primarily through the publication of articles. With this rapid increase in textual data, anonymous content is also increasing, and researchers are searching for strategies to identify the author of an unknown text. There is a need for a system that identifies the actual author of an unknown text from a given set of writing samples. This study presents a novel approach based on ensemble learning, DistilBERT, and conventional machine learning techniques for authorship identification. The proposed approach extracts informative author characteristics using a count vectorizer and bi-gram Term Frequency-Inverse Document Frequency (TF-IDF). The extensive and detailed "All the news" dataset is used for experimentation. The dataset is divided into three subsets (article1, article2, and article3). We limit the scope of the dataset, selecting ten authors in the first scope and twenty authors in the second. The proposed ensemble learning and DistilBERT models perform better on all three subsets of the "All the news" dataset. In the first scope (10 authors), the proposed ensemble learning approach yields an accuracy gain of 3.14% and DistilBERT a gain of 2.44% on the article1 subset. Similarly, in the second scope (20 authors), the proposed ensemble learning approach yields an accuracy gain of 5.25% and DistilBERT 7.17% on the article1 subset, surpassing previous state-of-the-art studies.
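The bi-gram TF-IDF feature extraction mentioned above can be illustrated with a minimal pure-Python sketch (a toy re-implementation, with smoothed IDF as in common vectorizer libraries; the corpus and preprocessing are placeholders, not the paper's pipeline):

```python
import math
from collections import Counter

def bigrams(text):
    """Word-level bi-grams of a lowercased text."""
    words = text.lower().split()
    return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

def bigram_tfidf(corpus):
    """Bi-gram TF-IDF vectors for a small corpus.

    TF is the bi-gram count normalized by document length;
    IDF uses the smoothed form log((1 + n) / (1 + df)) + 1."""
    docs = [Counter(bigrams(d)) for d in corpus]
    vocab = sorted({g for d in docs for g in d})
    n = len(docs)
    idf = {g: math.log((1 + n) / (1 + sum(g in d for d in docs))) + 1
           for g in vocab}
    vectors = []
    for d in docs:
        total = sum(d.values()) or 1
        vectors.append([d[g] / total * idf[g] for g in vocab])
    return vocab, vectors
```

Bi-grams shared by all authors receive low IDF weight, while author-specific word pairs are emphasized, which is what makes such features useful for authorship attribution.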
Over time, numerous online communication platforms have emerged that allow people to express themselves, which has also increased the dissemination of toxic language, such as racism, sexual harassment, and other negative behaviors that are not accepted in polite society. As a result, toxic language identification in online communication has emerged as a critical application of natural language processing. Numerous academic and industrial researchers have recently studied toxic language identification using machine learning algorithms. However, in several machine learning models, nontoxic comments containing particular identity descriptors, such as Muslim, Jewish, White, and Black, were assigned unrealistically high toxicity ratings. This research analyzes and compares modern deep learning algorithms for multilabel toxic comment classification. We explore two scenarios: the first is multilabel classification of religion-related toxic comments, and the second is multilabel classification of race- or ethnicity-related toxic comments, both with various word embeddings (GloVe, Word2vec, and FastText) and without pre-trained embeddings, using an ordinary embedding layer. Experiments show that the CNN model produced the best results for classifying multilabel toxic comments in both scenarios. We compare the performance of these modern deep learning models in terms of multilabel evaluation metrics.
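Multilabel outputs are scored differently from single-label ones, since each comment carries a whole vector of labels. Two common multilabel metrics can be sketched in a few lines (the label vectors below are made-up illustrations, not the paper's data):

```python
def hamming_loss(y_true, y_pred):
    """Fraction of individual label slots predicted incorrectly,
    pooled over all samples and labels (lower is better)."""
    total = sum(len(t) for t in y_true)
    wrong = sum(ti != pi
                for t, p in zip(y_true, y_pred)
                for ti, pi in zip(t, p))
    return wrong / total

def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose full label vector is exactly right
    (the strictest multilabel metric)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```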
The problem of determining the position of a lidar with optimal accuracy is relevant in various fields of application. It is an important task of robotics, widely used as a model in vehicle route planning, flight control systems, navigation systems, machine learning, and the management of economic efficiency, as well as in studies of land degradation processes, planning and control of agricultural production stages, and land inventories for evaluating the consequences of various environmental impacts. The paper provides a detailed analysis of the proposed parallel algorithm for determining the current position of the lidar. To optimize the computing process, accelerate it, and make real-time results attainable, the OpenMP parallel computing technology is used, which also makes it possible to significantly reduce the running time relative to the sequential variant. A number of numerical experiments on the multi-core architecture of modern computers have been carried out. As a result, the computing process was accelerated about eight times, achieving an efficiency of 0.97. It is shown that the gap in execution time between the sequential and parallel algorithms grows substantially as the number of lidar measurements and iterations increases, which is relevant when simulating various problems of robotics. The obtained results can be substantially improved by selecting a computing system with more than eight cores. The main areas of application of the developed method are described, and its shortcomings and prospects for further research are outlined.
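The reported figures are internally consistent under the standard definitions of parallel speedup and efficiency: on 8 cores, an efficiency of 0.97 corresponds to a speedup of 7.76, i.e. "about eight times". A small sketch of the arithmetic (the timing values are hypothetical):

```python
def speedup(t_sequential, t_parallel):
    """Parallel speedup S = T_seq / T_par."""
    return t_sequential / t_parallel

def efficiency(t_sequential, t_parallel, cores):
    """Parallel efficiency E = S / p, where p is the core count;
    E = 1.0 is ideal linear scaling."""
    return speedup(t_sequential, t_parallel) / cores
```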
Cough analysis remains an area that has received scant attention from AI researchers. This can be attributed to several factors, such as inefficient auxiliary frameworks, the high cost of obtaining databases, and the difficulty of building classifiers. The present paper categorizes and reviews progress on cough sound analysis, AI models, and data collection methods through the IoT (Internet of Things) for the classification of pulmonary diseases. Moreover, it proposes a multi-layered convolutional neural network (Deep Convolutional Neural Network, DCNN) for the classification of eight pulmonary diseases. The DCNN uses spectral features, cepstral coefficients, chroma features, and spectrograms extracted from cough sounds for training. To test the effectiveness of the model, a comparative study with four standard models was conducted on a database of 112 patients collected from a pediatric facility in India through a cloud server and wearable electronic sensors. Results demonstrated that the proposed model achieved an accuracy of 0.4 on the test set, which was practically equivalent to recent models proposed in the surveyed literature.
In the vast majority of cases, the braking process is used to prevent traffic accidents. The effectiveness of this process depends on the design and functionality of vehicle braking systems (the presence of an anti-lock braking system, an emergency braking system, preventive safety systems, etc.) and is limited by the frictional forces at the contact of the tires with the road. Improving methodical approaches to evaluating the braking effectiveness of cars increases the accuracy and objectivity of establishing the circumstances of emergency situations. The paper analyses existing methods of evaluating the braking parameters of vehicles (including those with an electric drive), modern methods of evaluating electric vehicle braking parameters, and the conduct of auto-technical investigations of traffic accidents, which involve different methodological approaches and digital technologies at all stages of expert research. In contrast to existing models, the proposed mathematical model for estimating the trajectory of two-axle cars during braking allows for considering various types of input parameter uncertainty, reducing the range of possible modeling errors by 39%. Comparing simulation results and experimental data showed that the average relative error is 4.58%, and the maximum error did not exceed 7.82%. The study of the stability of electric vehicles' movement during emergency braking, carried out with the developed mathematical models in the Mathcad software environment, reveals the content of the algorithm for a similar calculation in specialized computer programs of auto-technical examination. Such calculations are relevant in the analysis of real accident situations, where specific circumstances and features that cannot be captured during modeling in specialized software must be taken into account.
Simultaneously, the probability of type I errors is reduced by 2–19%, and type II errors are reduced by 43–68%.
This paper proposes a modified architecture of the Long-Term Evolution (LTE) mobile network to provide services for the Internet of Things (IoT). This is achieved by allocating a narrow bandwidth and transferring the scheduling functions from the eNodeB base station to an NB-IoT controller. A method for allocating uplink and downlink resources of the hybrid LTE/NB-IoT technology is applied to ensure end-to-end Quality of Service (QoS). This method handles traffic/resource scheduling on the NB-IoT controller, which allows eNodeB scheduling to remain unchanged. The paper also proposes a prioritization approach within the IoT traffic to provide End-to-End (E2E) QoS in the integrated LTE/NB-IoT network. Further, we develop "smart queue" management algorithms for IoT traffic prioritization. To demonstrate the feasibility of our approach, we performed a number of simulation experiments and concluded that the proposed approach ensures high end-to-end QoS for real-time traffic by reducing the average end-to-end transmission delay.
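The priority-based queueing idea behind such "smart queue" schemes can be sketched with a strict-priority queue. This is a generic illustration, not the paper's algorithm; the traffic class names and priority values are assumptions for the sketch.

```python
import heapq
from itertools import count

# Hypothetical traffic classes: lower number = higher priority.
PRIORITY = {"real_time": 0, "telemetry": 1, "bulk": 2}

class SmartQueue:
    """Strict-priority queue: real-time packets are always dequeued
    before lower-priority classes, reducing their queueing delay;
    FIFO order is preserved within each class."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # monotonically increasing tie-breaker

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```

A real scheduler would add safeguards (e.g. weighted service or aging) so that bulk IoT traffic is not starved, but the sketch shows why prioritization lowers the average end-to-end delay of real-time traffic.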