Pump sizing is the process of dimensionally matching an impeller and stator to achieve a satisfactory performance test result and good service life in the operation of progressive cavity pumps. In this process, historical data analysis and dimensional monitoring are done manually, consuming a large number of man-hours and requiring deep knowledge of progressive cavity pump behavior. This paper proposes the use of graph neural networks to build a prototype that recommends interference during the pump sizing process of a progressive cavity pump. To this end, data from different applications is combined with individual control spreadsheets to build the database used by the prototype. From the pre-processed data, complex network techniques and the betweenness centrality metric are used to calculate the degree of importance of each order confirmation, as well as the dimensions of the rotors. Using the proposed method, a mean squared error of 0.28 is obtained for the cases in which there are recommendations for order confirmations. Based on these results, it is noticeable that the dimensions defined by the project engineers during the pump sizing process are similar, and this outcome can be used to validate new design definitions.
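As an illustration of the centrality computation the abstract mentions, the sketch below implements Brandes-style betweenness centrality for a small undirected graph in plain Python; the toy graph and adjacency-dict representation are illustrative, not the paper's actual order-confirmation network.

```python
from collections import defaultdict, deque

def betweenness(adj):
    """Betweenness centrality for a small undirected graph.
    adj: dict mapping each node to a list of its neighbours."""
    cb = defaultdict(float)
    nodes = list(adj)
    for s in nodes:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        dist = {s: 0}
        sigma = {s: 1.0}
        preds = defaultdict(list)
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    sigma[w] = 0.0
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Brandes' back-propagation of pair dependencies.
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    # Undirected graph: every pair was counted from both endpoints.
    return {v: cb[v] / 2 for v in nodes}

# Toy star graph: b sits on every shortest path between the three leaves.
g = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print(betweenness(g))  # b scores 3.0 (pairs a-c, a-d, c-d), leaves score 0
```

Nodes with high betweenness lie on many shortest paths, which is what makes the metric a plausible proxy for the importance of an order confirmation in the network.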
This paper presents a novel automated road damage detection approach using Unmanned Aerial Vehicle (UAV) images and deep learning techniques. Maintaining road infrastructure is critical for ensuring a safe and sustainable transportation system. However, the manual collection of road damage data is labor-intensive and can be unsafe for humans. Therefore, we propose using UAVs and Artificial Intelligence (AI) technologies to significantly improve the efficiency and accuracy of road damage detection. Our approach uses three algorithms, YOLOv4, YOLOv5, and YOLOv7, for object detection and localization in UAV images. We trained and tested these algorithms on a combination of the RDD2022 dataset from China and a Spanish road dataset. The experimental results demonstrate that our approach is efficient, achieving a mean average precision (mAP@.5) of 59.9% for YOLOv5, 73.20% for YOLOv7, and 65.70% for a YOLOv5 model with a Transformer Prediction Head. These results demonstrate the potential of UAVs and deep learning for automated road damage detection and pave the way for future research in this field.
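For context on the reported metric: a detection counts toward mAP@.5 when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion (the boxes and values are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (0, 0, 10, 10)
gt = (0, 0, 10, 20)
# Overlap 100, union 100 + 200 - 100 = 200, so IoU = 0.5:
# at the mAP@.5 threshold this prediction just counts as a true positive.
print(iou(pred, gt))  # 0.5
```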
Smart grid systems have become popular and necessary for the development of a sustainable power grid. These systems use different technologies to provide optimized services to the users of the network. On the computing side, they optimize electrical services by processing the large amount of data generated. However, privacy and security are essential in this kind of system: because this data may reveal users' personal information, it must be protected. Blockchain technology has proven to be an efficient architecture for solving privacy and security problems in different scenarios. Over the years, different blockchain platforms have emerged to solve specific problems in different areas, but the use of different platforms has fragmented the market, and the smart grid scenario is no exception. This work proposes a blockchain architecture that uses sidechains to make the system scalable and adaptable. We used three blockchains to ensure privacy, security, and trust in the system and, to universalize the proposed solution, the Open Smart Grid Protocol and smart contracts. The results show that the architecture's security and privacy are guaranteed, making it feasible for implementation in real systems, although scalability issues regarding the storage of the generated data still exist.
The increase in life expectancy, according to the World Health Organization, is a fact, and with it rises the incidence of age-related neurodegenerative diseases. The most recurrent symptoms are those associated with tremors resulting from Parkinson's disease (PD) or essential tremor (ET). The main treatment alternatives for these patients are medication and surgical intervention, which sometimes have restrictions and side effects. Through computer simulations in MATLAB, this work investigates the performance of adaptive algorithms based on least mean squares (LMS) to suppress tremors in the upper limbs, especially the hands. Pathological hand tremor signals, related to PD, present components at frequencies between 3 Hz and 6 Hz, with most of the energy in the fundamental and second harmonics, while physiological hand tremors, associated with ET, vary between 4 Hz and 12 Hz. We simulated these signals and used them as reference signals in three adaptive algorithms: the filtered-x least mean square (Fx-LMS), the filtered-x normalized least mean square (Fx-NLMS), and a proposed hybrid Fx-LMS–NLMS. Our results showed that the vibration control provided by the hybrid Fx-LMS–NLMS algorithm is the most suitable for physiological tremors. For pathological tremors, a proposed algorithm with a filtered sinusoidal input signal, Fsinx-LMS, presented the best results.
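A minimal sketch of the plain LMS update that the filtered-x variants build on, applied to a synthetic 5 Hz "tremor" at an illustrative sampling rate; note that a real Fx-LMS implementation also filters the reference through a secondary-path model, which this sketch omits.

```python
import math

def lms_cancel(d, x, taps=8, mu=0.01):
    """Plain LMS: adapt an FIR filter so its output tracks d from reference x.
    Returns the residual e[n] = d[n] - y[n], i.e. what is left after cancellation."""
    w = [0.0] * taps            # filter weights
    buf = [0.0] * taps          # delay line holding recent reference samples
    e = []
    for n in range(len(d)):
        buf = [x[n]] + buf[:-1]                       # shift in newest sample
        y = sum(wi * xi for wi, xi in zip(w, buf))    # filter output
        err = d[n] - y
        w = [wi + 2 * mu * err * xi for wi, xi in zip(w, buf)]  # LMS update
        e.append(err)
    return e

fs = 100  # Hz, illustrative sampling rate
# A 5 Hz "pathological tremor" and a phase-shifted reference measurement.
tremor = [math.sin(2 * math.pi * 5 * n / fs) for n in range(2000)]
ref = [math.sin(2 * math.pi * 5 * n / fs + 0.3) for n in range(2000)]
residual = lms_cancel(tremor, ref)
early = sum(v * v for v in residual[:200])
late = sum(v * v for v in residual[-200:])
# After convergence the residual energy is far below the initial error energy.
print(late < early)
```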
The evolution of computing devices and ubiquitous computing has led to the development of the Internet of Things (IoT). Smart Grids (SGs) stand out among the many applications of IoT and comprise several embedded intelligent technologies to improve the reliability and safety of power grids. SGs use communication protocols for information exchange, such as the Open Smart Grid Protocol (OSGP). However, OSGP does not support integration with devices compliant with the Constrained Application Protocol (CoAP), a communication protocol used in conventional IoT systems. This article therefore presents an efficient software interface that integrates OSGP and CoAP. The results demonstrate the effectiveness of the proposed solution, which introduces low communication overhead and enables the integration of IoT and SG systems.
Nowadays, there are many fragmented records of patients' health data in different locations, such as hospitals, clinics, and organizations around the world. With the arrival of the COVID-19 pandemic, several governments and institutions struggled to make satisfactory, fast, and accurate decisions in a wide, dispersed, and global environment. In the current literature, the most common related challenges include delay (network latency), software scalability, health data privacy, and global patient identification. We propose to design, implement, and evaluate a healthcare software architecture focused on a global vaccination strategy, considering healthcare privacy issues, latency mitigation, scalability, and the use of a global identification. We designed and implemented a prototype of a healthcare software called Fog-Care, evaluating performance metrics such as latency, throughput, and send rate in a hypothetical scenario where a globally integrated vaccination campaign is adopted in widely dispersed locations (Brazil, USA, and United Kingdom), with an approach based on blockchain, unique identity, and fog computing technologies. The evaluation results show that the minimum latency is under 1 second and that the average latency grows linearly, indicating that a decentralized infrastructure integrating blockchain, global unique identification, and fog computing is feasible for building a scalable solution for a global vaccination campaign across hospitals, clinics, and research institutions around the world, while addressing the associated data-sharing issues of privacy and identification.
Machine Learning (ML) algorithms process input data, making it possible to recognize and extract patterns from large data volumes. Likewise, Internet of Things (IoT) devices can contribute knowledge in a Federated Learning (FL) environment, sharing model parameters without exposing their raw data. However, FL suffers from non-independent and identically distributed (non-iid) data, that is, heterogeneous and biased input data, such as that produced by smartphone data sources. This bias causes slow convergence of ML algorithms and high energy and bandwidth consumption. This work proposes a method that mitigates non-iid data through the FedAvg-BE algorithm, which extends Federated Learning with a border entropy evaluation to select good input from a non-iid data environment. Extensive experiments were performed using publicly available datasets to train deep neural networks. The evaluation shows that, with the proposed model in Federated Learning settings, execution time is reduced by up to 22% for the MNIST dataset and 26% for the CIFAR-10 dataset. These results demonstrate the feasibility of the proposed model for mitigating the impact of non-iid data.
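The border entropy criterion itself is not specified in the abstract, so the sketch below uses label-distribution entropy as a stand-in selection rule on top of a plain FedAvg weight average; the clients, labels, weights, and cutoff are all illustrative.

```python
import math

def label_entropy(labels):
    """Shannon entropy (bits) of a client's label distribution.
    High entropy means balanced labels; low entropy means skewed (non-iid) data."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def fedavg(client_weights, client_sizes):
    """Plain FedAvg: size-weighted average of flat parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Hypothetical clients: "a" holds balanced labels, "b" heavily skewed ones.
clients = {
    "a": [0, 1, 2, 3, 0, 1, 2, 3],   # 4 equally likely classes -> 2.0 bits
    "b": [0, 0, 0, 0, 0, 0, 0, 1],   # almost one class -> ~0.54 bits
}
weights = {"a": [1.0, 2.0], "b": [3.0, 4.0]}
sizes = {"a": 8, "b": 8}

# Select only clients whose data is diverse enough, then aggregate them.
selected = [c for c in clients if label_entropy(clients[c]) > 1.0]
avg = fedavg([weights[c] for c in selected], [sizes[c] for c in selected])
print(selected, avg)  # ['a'] [1.0, 2.0]
```

Filtering out low-entropy contributors before averaging is one way such a scheme can reduce wasted communication rounds, which is consistent with the reported savings in execution time.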
The usage of drones is increasingly spreading into new fields of application, ranging from agriculture to security. One of these new applications is sound recording in areas of difficult access. The challenge that arises when using drones for this purpose is that the sound of the recorded sources must be separated from the noise produced by the drone. The intensity of the noise emitted by the drone depends on several factors, such as engine power, propeller rotation speed, or propeller type. Noise reduction is thus one of the greatest challenges for the next generations of unmanned aerial vehicles (UAVs) and unmanned aerial systems (UAS). Even though some advances have been made on this matter, drones still produce considerable noise. In this article, we approach the problem of removing drone noise from single-channel audio recordings using blind source separation (BSS) techniques, in particular the singular spectrum analysis (SSA) algorithm. Furthermore, we propose an optimization of this algorithm with a spatial complexity of O(nt), which is significantly lower than that of the naive implementation, O(tk^2), where n is the number of sounds to be recovered, t is the signal length, and k is the window size. The best value for each parameter (window length and number of components used to reconstruct the source) is selected by testing a wide range of values at different noise-to-sound ratios. Our system can greatly reduce the noise produced by the drone in such recordings.
On average, after the recording has been processed by our method, the noise is reduced by 1.41 decibels.
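For reference, a basic SSA decomposition (the textbook form, not the paper's O(nt)-space optimization) proceeds by embedding the signal in a Hankel trajectory matrix, truncating its SVD, and reconstructing by anti-diagonal averaging. The window length, rank, and synthetic "drone noise" below are illustrative.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Basic SSA: embed x in a trajectory matrix, keep the top `rank`
    SVD components, and reconstruct a 1-D series by diagonal averaging."""
    t = len(x)
    k = t - window + 1
    # Trajectory (Hankel) matrix: column j holds x[j : j + window].
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank approximation
    # Average the anti-diagonals back into a single time series.
    out = np.zeros(t)
    cnt = np.zeros(t)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

rng = np.random.default_rng(0)
n = 400
clean = np.sin(2 * np.pi * np.arange(n) / 50)    # slow tone (the "source")
noisy = clean + 0.5 * rng.standard_normal(n)     # broadband "drone" noise
est = ssa_reconstruct(noisy, window=60, rank=2)  # a sinusoid needs 2 components
# The reconstruction is much closer to the clean tone than the raw recording.
print(np.mean((est - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The O(tk^2) space cost of this naive version comes from materializing the full trajectory matrix and its factors, which is exactly what the proposed optimization avoids.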
In the current era of social media, the proliferation of images from unreliable sources underscores the pressing need for robust methods to detect forged content, particularly amid the rapid evolution of image manipulation technologies. The existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection that leverages a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image, building on two passive detection methodologies: two streams extract specific data subsets, while a third processes the unaltered image. Each network independently processes its respective stream, capturing different facets of the image, and their outputs are fused by concatenation to determine whether the image has been manipulated, yielding a detection framework that surpasses the efficacy of its constituent methods. Our work also introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than the algorithmically generated, patch-based datasets used by other state-of-the-art methods. By encompassing genuine manipulation scenarios, it enhances the model's ability to generalize across varied manipulation techniques, improving performance in real-world settings.
After training, the merged approach obtained an accuracy of 89.59% on the validation set, significantly higher than the model trained only on unaltered images, which obtained 78.64%, and the two other models trained on images preprocessed to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the Discrete Wavelet Transform method. Moreover, our approach exhibits lower accuracy variance than the alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not provide information about the specific location or type of tampering, which limits its practical applications.
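As an illustration of the Discrete Wavelet Transform stream, the sketch below applies a one-level 2-D Haar transform in plain Python; the tiny 4x4 image is illustrative, and the paper's exact wavelet and implementation are not specified here.

```python
def haar2d(img):
    """One-level 2-D Haar DWT on a square grayscale image (lists of lists,
    even side length). Returns (LL, LH, HL, HH); the high-frequency bands
    emphasise local detail, where splicing artefacts tend to show up."""
    def haar_rows(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
            hi.append([(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
        return lo, hi
    def transpose(m):
        return [list(c) for c in zip(*m)]
    L, H = haar_rows(img)                 # horizontal pass
    LL, LH = (transpose(b) for b in haar_rows(transpose(L)))  # vertical pass
    HL, HH = (transpose(b) for b in haar_rows(transpose(H)))
    return LL, LH, HL, HH

# A perfectly flat image has all its energy in LL; the detail bands are zero,
# so any nonzero detail coefficients flag local structure (or tampering seams).
flat = [[8.0] * 4 for _ in range(4)]
LL, LH, HL, HH = haar2d(flat)
print(LL[0][0], HH[0][0])  # 8.0 0.0
```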
Since the early 2000s, life in cities has changed significantly due to the Internet of Things (IoT). This concept enables developers to integrate different devices that collect, store, and process large amounts of data, enabling new services that improve various professional and personal activities. However, privacy issues arise with the large amount of data generated, and solutions based on blockchain technology and smart contracts have been developed to address them. Nevertheless, several issues must still be taken into account when developing blockchain architectures for the IoT scenario, because security flaws still exist in smart contracts, mainly due to the difficulty of writing correct contract code. This article presents a blockchain storage architecture for license plate recognition (LPR) systems in smart cities, focusing on privacy, performance, and security. The proposed architecture relies on the Ethereum platform. Each smart contract matches the privacy preferences of a license plate to be anonymized through public encryption, and the data captured by the LPR system can only be stored if the smart contract enables it. However, in a case of motivation foreseen by the legislation, a competent user can change the smart contract and enable the storage of the captured data. Experimental results show that the performance of the proposed architecture is satisfactory regarding the scalability of the built private network. Furthermore, tests of our smart contract with security and structure analysis tools demonstrate that the developed script is fraud-proof. The results of all experiments provide evidence that our architecture is feasible for use in real scenarios.
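A minimal sketch of the consent-gated storage idea, with SHA-256 pseudonymization standing in for the public encryption and an in-memory dict standing in for the on-chain smart contract; all names, plates, and the storage layout are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the on-chain consent check: a mapping from the
# pseudonymous plate identifier to the owner's storage preference.
consent = {}

def plate_id(plate):
    """Pseudonymous identifier for a plate. SHA-256 is used here purely
    for illustration in place of the architecture's public encryption."""
    return hashlib.sha256(plate.encode()).hexdigest()

def store_capture(plate, storage):
    """Store an LPR capture only if the matching preference allows it,
    mirroring the rule that storage requires the smart contract's approval."""
    key = plate_id(plate)
    if consent.get(key, False):
        storage.append(key)   # only the pseudonym is persisted
        return True
    return False

consent[plate_id("ABC1234")] = True   # this owner permits storage
db = []
print(store_capture("ABC1234", db))   # True: storage permitted
print(store_capture("XYZ9999", db))   # False: no consent recorded
```

An authorized change of preference (the "motivation foreseen by the legislation" case) corresponds to flipping the consent entry, which in the real architecture is an update to the smart contract rather than a dict write.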