The use of photovoltaic (PV) systems for clean electrical energy has increased. However, due to their low efficiency, researchers have looked for ways to improve their effectiveness and efficiency. Maximum Power Point Tracking (MPPT) inverters maximize the energy extracted from PV panels, and they require algorithms to track the Maximum Power Point (MPP). Several intelligent algorithms show acceptable performance; however, few consider Artificial Neural Networks (ANN), which have the advantage of fast and accurate MPP tracking. The controller's effectiveness depends on the algorithm used in the hidden layer and on how well the neural network has been trained. Articles from the last six years were studied, and a review of papers, reports, and other documents using ANN for MPPT control is presented. The algorithms are based on ANN alone or on a hybrid combination with fuzzy logic (FL) or a metaheuristic algorithm. According to this review, ANN MPPT algorithms deliver an average performance of 98% under uniform conditions, exhibit faster convergence, and oscillate less around the MPP.
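As an illustration of the kind of ANN MPPT controller surveyed, a minimal one-hidden-layer network mapping operating conditions to an MPP voltage setpoint can be sketched. Everything below (architecture, synthetic data, and the target relation) is an assumption for illustration, not taken from any reviewed paper:

```python
import numpy as np

# Minimal sketch: a one-hidden-layer ANN maps (irradiance, temperature)
# to a normalized MPP voltage setpoint. Data and targets are synthetic.
rng = np.random.default_rng(0)

X = rng.uniform(0.2, 1.0, size=(200, 2))   # normalized irradiance, temperature
# Hypothetical target: MPP voltage rises with irradiance, drops with heat.
y = 0.30 + 0.08 * X[:, :1] - 0.03 * X[:, 1:2]

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden layer
    return h @ W2 + b2, h                  # predicted MPP voltage, activations

lr, losses = 0.3, []
for _ in range(1000):
    pred, h = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Plain gradient descent on (half) the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

In a real controller the predicted setpoint would drive the inverter's duty cycle; here the loop only shows that the loss decreases during training.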
This paper introduces a novel background subtraction method that utilizes texture-level analysis based on a Gabor filter bank and statistical moments. The method addresses the challenge of accurately detecting moving objects whose color intensity variability or texture is similar to the surrounding environment, which conventional methods struggle to handle. The proposed method distinguishes between foreground and background by capturing different frequency components with the Gabor filter bank and quantifying the texture level through statistical moments. Extensive experimental evaluations use datasets featuring varying lighting conditions, uniform and non-uniform textures, shadows, and dynamic backgrounds. The performance of the proposed method is compared against existing methods using metrics such as sensitivity, specificity, and false positive rate. The experimental results demonstrate that the proposed method outperforms the others in accuracy and robustness. It effectively handles scenarios with complex backgrounds, lighting changes, and objects whose texture or color intensity resembles the background, retaining object structure while minimizing false detections and noise. This paper provides valuable insights into computer vision and object detection, offering a promising solution for accurate foreground detection in applications such as video surveillance and motion tracking.
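A minimal sketch of the texture-level idea, assuming a hand-rolled Gabor kernel and simple statistical moments (the paper's actual filter-bank parameters and decision rule are not reproduced):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    # Real part of a Gabor kernel: Gaussian envelope times a cosine wave.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    # Naive 'valid' correlation, enough for a sketch.
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def texture_descriptor(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # For each orientation, filter and summarize the response with
    # statistical moments: mean, variance, and a third central moment.
    feats = []
    for t in thetas:
        r = convolve2d(img, gabor_kernel(9, 2.0, t, 4.0))
        m = r.mean()
        feats += [m, r.var(), ((r - m) ** 3).mean()]
    return np.array(feats)
```

A pixel region would be labeled foreground when its descriptor differs enough from the background model's; a flat patch yields near-zero response variance while a textured patch does not.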
Automatic sign language recognition is a challenging task in machine learning and computer vision. Most works have focused on recognizing sign language using hand gestures only. However, body motion and facial gestures play an essential role in sign language interaction. Taking this into account, we introduce an automatic sign language recognition system based on multiple gestures, including hands, body, and face. We used a depth camera (OAK-D) to obtain the 3D coordinates of the motions and recurrent neural networks for classification. We compare multiple model architectures based on recurrent networks, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, and develop a noise-robust approach. For this work, we collected a dataset of 3000 samples of 30 different signs of the Mexican Sign Language (MSL), containing feature coordinates of the face, body, and hands in 3D space. After extensive evaluation and ablation studies, our best model obtained an accuracy of 97% on clean test data and 90% on highly noisy data.
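A minimal numpy sketch of a GRU cell of the kind compared in such systems; dimensions, weights, and the input sequence are illustrative, and the classifier head is omitted:

```python
import numpy as np

# Toy GRU encoder for keypoint sequences; sizes and data are illustrative.
rng = np.random.default_rng(1)
D, H = 6, 4            # e.g. two 3D landmarks per frame -> 6 features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

Wz = rng.normal(0, 0.3, (D + H, H))   # update-gate weights
Wr = rng.normal(0, 0.3, (D + H, H))   # reset-gate weights
Wh = rng.normal(0, 0.3, (D + H, H))   # candidate-state weights

def gru_step(x, h):
    xh = np.concatenate([x, h])
    z = sigmoid(xh @ Wz)                                  # update gate
    r = sigmoid(xh @ Wr)                                  # reset gate
    h_tilde = np.tanh(np.concatenate([x, r * h]) @ Wh)    # candidate state
    return (1 - z) * h + z * h_tilde

def encode(seq):
    # Run the cell over a (T, D) keypoint sequence; the final hidden
    # state serves as the embedding fed to a sign classifier.
    h = np.zeros(H)
    for x in seq:
        h = gru_step(x, h)
    return h

emb = encode(rng.normal(0, 1, (20, D)))   # 20 frames of noisy keypoints
```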
Artificial vision system applications have generated significant interest because they obtain information through one or several cameras, which can be found in many everyday places such as parks, avenues, squares, and houses. When the aim is to obtain information from large areas, tracking an object of interest, such as a person or vehicle, becomes complicated because of the limited vision space a single camera can cover; this opens the way to distributed zone monitoring systems, made up of a set of cameras that together cover a larger area. Distributed zone monitoring systems add great versatility but become more complex in terms of information analysis, communication, interoperability, and heterogeneity in the interpretation of information. In the literature, the development of distributed schemes has focused on data communication and sharing challenges. Currently, there are no specific criteria for information exchange and analysis in a distributed system; hence, different models and architectures have been proposed. In this work, the authors present a framework that provides homogeneity in a distributed monitoring system. Information is obtained from different cameras, and a global reference system is defined for the generated trajectories, which are mapped independently of the model used to obtain the dynamics of people's movement within the vision area of the distributed system, thus allowing its use in works with large amounts of information from heterogeneous sources. Furthermore, we propose a novel similarity metric that allows information queries over heterogeneous sources. Finally, to evaluate the proposed framework's performance, the authors developed several distributed query applications in an augmented reality system based on realistic environments and historical data retrieval using a client–server model.
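One common way to place trajectories from heterogeneous cameras into a single global reference is a per-camera planar homography. The sketch below assumes that model and an illustrative closest-point trajectory distance; the paper's own mapping and similarity metric are not reproduced here:

```python
import numpy as np

def apply_homography(H, pts):
    # Map (N, 2) pixel points to (N, 2) global-plane points.
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def traj_distance(a, b):
    # Symmetric mean closest-point distance between two trajectories.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Hypothetical calibrations: camera 1 is a pure shift, camera 2 a zoom.
H1 = np.array([[1, 0, -10], [0, 1, -5], [0, 0, 1]], float)
H2 = np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1]], float)

# One walk observed by both cameras in their own pixel coordinates.
walk = np.stack([np.linspace(0, 9, 10), np.linspace(0, 4, 10)], axis=1)
cam1 = walk + [10, 5]          # shifted pixels as seen by camera 1
cam2 = walk * 2                # zoomed pixels as seen by camera 2

g1 = apply_homography(H1, cam1)   # both land on the same global track
g2 = apply_homography(H2, cam2)
```

After mapping, the two reports of the same walk coincide in the global frame, so their trajectory distance is near zero, which is the property a cross-camera similarity query relies on.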
This paper introduces a tool for calibrating multiple Kinect V2 sensors. To achieve the calibration, at least three acquisitions are needed from each camera. The method uses the Kinect's coordinate mapping capabilities to register data between the camera, depth, and color spaces. It follows a novel approach: it obtains multiple 3D point matches between adjacent sensors and uses them to estimate the camera parameters. Once the cameras are calibrated, the tool can perform point cloud fusion, transforming all the 3D points to a single reference frame. We tested the system with a network of four Kinect V2 sensors and present calibration results. The tool is implemented in MATLAB using the Kinect for Windows SDK 2.0.
• A calibration tool for multiple Kinect V2 sensors is proposed.
• The method relies on multiple 3D point matches between adjacent sensors.
• The tool allows point cloud fusion and visualization.
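A standard way to estimate the pose between adjacent sensors from such 3D point matches is the Kabsch/SVD rigid alignment; a minimal numpy sketch (not the tool's MATLAB implementation) under synthetic data:

```python
import numpy as np

def rigid_transform(A, B):
    # Least-squares rotation R and translation t with B ≈ A @ R.T + t,
    # estimated from matched 3D points via the Kabsch/SVD method.
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

rng = np.random.default_rng(2)
A = rng.normal(0, 1, (8, 3))                      # points in sensor 1 frame
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -1.0, 3.0])
B = A @ R_true.T + t_true                         # same points in sensor 2 frame

R, t = rigid_transform(A, B)                      # recovers the pose exactly
```

With the pairwise poses known, every sensor's cloud can be transformed into one reference frame, which is what the fusion step requires.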
As new server technologies come to market, it is necessary to update or create methodologies for data analysis and exploitation. Applied methodologies range from decision-tree categorization to artificial neural networks (ANN), which implement artificial intelligence (AI) for decision making. One of the least used strategies is drill-down (DD) analysis, a subcategory of decision trees that, lacking AI resources, has lost interest among researchers. However, its easy implementation makes it a suitable tool for database processing systems. This research developed a systematic review of DD analysis in the scientific literature in order to establish a knowledge platform and determine whether it is worth integrating DD with superior methodologies, such as those based on ANN, to produce better diagnoses in future work. A total of 80 scientific articles from 1997 to 2023 were reviewed, showing a peak frequency in 2021 and experimental as the predominant methodology. Of 100 problems solved, 42% used the experimental methodology, 34% descriptive, 17% comparative, and just 7% post facto. We detected 14 unsolved problems, of which 50% fall in the experimental area. By study type, the methodologies included correlation studies, processes, decision trees, plain queries, granularity, and labeling. Only one work focuses on mathematics, which reduces the expectation of new knowledge production, and only one work reported ANN usage.
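The drill-down operation itself is simple to sketch: aggregate a fact table at a coarse dimension, then restrict to one group and regroup at a finer grain. The field names and figures below are invented for illustration:

```python
from collections import defaultdict

# Illustrative fact table; columns and values are hypothetical.
sales = [
    {"year": 2022, "month": 1, "amount": 100},
    {"year": 2022, "month": 2, "amount": 150},
    {"year": 2023, "month": 1, "amount": 200},
    {"year": 2023, "month": 1, "amount": 50},
    {"year": 2023, "month": 3, "amount": 300},
]

def roll_up(rows, *dims):
    # Sum `amount` grouped by the given dimension columns.
    out = defaultdict(int)
    for r in rows:
        out[tuple(r[d] for d in dims)] += r["amount"]
    return dict(out)

def drill_down(rows, dim, value, *finer):
    # Restrict to one coarse group, then regroup at a finer grain.
    return roll_up([r for r in rows if r[dim] == value], *finer)

by_year = roll_up(sales, "year")                    # coarse view
detail = drill_down(sales, "year", 2023, "month")   # drill into 2023
```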
This paper proposes a deep learning model based on an artificial neural network with a single hidden layer for predicting the diagnosis of multiple sclerosis. The hidden layer includes a regularization term that prevents overfitting and reduces model complexity. The proposed learning model achieved higher prediction accuracy and lower loss than four conventional machine learning techniques. A dimensionality reduction method was used to select the most relevant features from 74 gene expression profiles for training the learning models. An analysis of variance test was performed to identify statistically significant differences between the means of the proposed model and the compared classifiers. The experimental results show the effectiveness of the proposed artificial neural network.
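The kind of regularized objective described, a classification loss plus an L2 penalty on the weights, can be sketched as follows; shapes, data, and the penalty coefficient are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sketch of a single-hidden-layer classifier objective with an L2
# penalty that discourages large weights (and hence overfitting).
rng = np.random.default_rng(3)
X = rng.normal(0, 1, (32, 10))   # e.g. 10 selected gene-expression features
y = rng.integers(0, 2, (32, 1)).astype(float)

W1 = rng.normal(0, 1.0, (10, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 1.0, (5, 1));  b2 = np.zeros(1)

def loss(lam):
    h = np.tanh(X @ W1 + b1)                  # single hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # L2 regularization term added to the data loss.
    return bce + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
```

At lam = 0 the objective reduces to plain cross-entropy; increasing lam adds a penalty proportional to the squared weight magnitudes, which is what shrinks the model during training.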
SEL, a State-based Language for Video Surveillance Modeling, is a formal language designed to represent and identify activities in surveillance systems through scenario semantics and the creation of motion primitives structured in programs. Motion primitives represent the temporal evolution of motion evidence. They are the most basic motion structures detected as motion evidence and are combined with operators such as sequence, parallel, and concurrency, which indicate trajectory evolution, simultaneity, and synchronization. SEL is a very expressive language that characterizes interactions by describing the relationships between motion primitives; these interactions determine the scenario's activity and meaning. An experimental model incorporating challenging surveillance activities is constructed to demonstrate the value of SEL and to assess the language's suitability for describing complicated tasks.
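To make the operator idea concrete, here is a toy interpreter in the spirit of sequence and parallel composition over timestamped motion evidence; this is an illustration only, not SEL's actual syntax or semantics:

```python
# Motion evidence as (label, start, end) intervals; combinators match
# composite activities. `seq` requires one primitive to finish before
# the next starts; `par` requires the two to overlap in time.

def prim(label):
    def match(events):
        return [(s, e) for (l, s, e) in events if l == label]
    return match

def seq(a, b):
    def match(events):
        return [(s1, e2) for (s1, e1) in a(events)
                         for (s2, e2) in b(events) if e1 <= s2]
    return match

def par(a, b):
    def match(events):
        return [(min(s1, s2), max(e1, e2))
                for (s1, e1) in a(events)
                for (s2, e2) in b(events) if s1 < e2 and s2 < e1]
    return match

# Hypothetical scene: a person walks, waves while walking, then stops.
events = [("walk", 0, 5), ("stop", 5, 8), ("wave", 3, 6)]
approach_then_halt = seq(prim("walk"), prim("stop"))
walk_while_waving = par(prim("walk"), prim("wave"))
```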
With new high-performance server technology in data centers and bunkers, it is necessary to optimize search engines to process time and resource consumption efficiently. The database query system, upheld by the standard SQL language, has maintained the same functional design since the advent of PL/SQL. This is because recent research has focused on computer resource management, encryption, and security rather than on improving data mining with AI tools, machine learning (ML), and artificial neural networks (ANNs). This work presents a projected methodology integrating a multilayer perceptron (MLP) with K-means. The methodology is compared with traditional PL/SQL tools and aims to improve database response time while outlining future advantages of ML and K-means in data processing. We propose a new corollary, h_k → H = SSE(C), where k > 0 and ∃X, executed on application software querying data collections of more than 306 thousand records. This study produced a comparative table between PL/SQL and MLP-Kmeans based on three hypotheses: line query, group query, and total query. The results show that the line query time increased to 9 ms, the group query from 88 to 2460 ms, and the total query from 13 to 279 ms. Testing one methodology against the other shows not only the extra fatigue and time consumption that training adds to database querying, but also that a neural network can produce more precise results than plain PL/SQL instructions, which will become more important for future domain-specific problems.
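The k-means side of such a hybrid can be sketched as follows: records are clustered offline, and a lookup scans only the cluster nearest the query point instead of the whole table. The MLP stage and the actual MLP-Kmeans methodology are not reproduced; data, sizes, and the lookup rule are illustrative:

```python
import numpy as np

# Two well-separated synthetic "tables" of 2D records.
rng = np.random.default_rng(4)
records = np.vstack([rng.normal(0, 0.5, (100, 2)),
                     rng.normal(5, 0.5, (100, 2))])

def kmeans(X, k, iters=20):
    # Deterministic spread-out initialization, then Lloyd iterations.
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) for j in range(k)])
    return C, labels

C, labels = kmeans(records, 2)

def nearest_record(q):
    j = np.argmin(((C - q) ** 2).sum(-1))   # pick the closest centroid
    members = records[labels == j]          # scan only that cluster
    return members[np.argmin(((members - q) ** 2).sum(-1))]
```

The design point is the trade-off the abstract reports: clustering adds up-front training cost, but each query then touches only a fraction of the records.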
Interferon-beta is one of the most widely prescribed disease-modifying therapies for multiple sclerosis patients. However, this treatment is only partially effective, and a significant proportion of patients do not respond to it. This paper proposes an alternative fuzzy logic system, based on the opinion of a neurology expert, to classify relapsing–remitting multiple sclerosis patients as high, medium, or low responders to interferon-beta. A pipeline prediction model trained with biomarkers associated with interferon-beta response is also proposed to predict whether patients are candidates for treatment with this drug, in order to avoid ineffective therapies. The classification results showed that the fuzzy system reached 100% efficiency, compared with 52% for an unsupervised hierarchical clustering method. The prediction model was then evaluated, achieving a testing accuracy of 0.8. Hence, a pipeline model including data standardization, data compression, and a learning algorithm could be a useful tool for obtaining reliable predictions about responses to interferon-beta.
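The fuzzy-classification step can be illustrated with triangular membership functions over a normalized response score, taking the highest membership grade as the class. The breakpoints below are invented for illustration, not the neurology expert's actual rules:

```python
def tri(x, a, b, c):
    # Triangular membership: 0 at a, peak 1 at b, back to 0 at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(score):
    # Membership grades for a normalized score in [0, 1]; breakpoints
    # are hypothetical placeholders for the expert-defined sets.
    grades = {
        "low":    tri(score, -0.01, 0.0, 0.4),
        "medium": tri(score, 0.2, 0.5, 0.8),
        "high":   tri(score, 0.6, 1.0, 1.01),
    }
    return max(grades, key=grades.get), grades
```

A score can partially belong to two sets (e.g. 0.35 is both somewhat "low" and somewhat "medium"); the winning grade gives the responder class.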