During the operation of "smart" house systems, fuzzy input data must be processed. Models based on artificial neural networks are used to process fuzzy input data from the sensors. However, each type of artificial neural network has its own advantages and processes different types of data and generates control signals with different accuracy. To solve this problem, a method for choosing the optimal type of artificial neural network is proposed. It is based on solving an optimization problem in which the optimization criterion is the error of a given type of artificial neural network determined for controlling the corresponding subsystem of a "smart" house. The same historical input data are used to train the different types of artificial neural networks. The research presents the dependencies between the type of neural network, the number of hidden layers, the number of neurons in each hidden layer, and the error of the calculated setting parameters relative to the expected results.
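The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's procedure: the candidate "networks" are stand-in predictors, and the sample data and names are assumed for the example.

```python
# Hypothetical sketch: choose the ANN type whose validation error on the
# shared historical data is lowest (the optimization criterion described
# in the abstract). The candidate predictors below are stand-ins for
# trained networks; all values are illustrative assumptions.

def validation_error(predict, samples):
    """Mean absolute error of a predictor over (input, expected) pairs."""
    return sum(abs(predict(x) - y) for x, y in samples) / len(samples)

def choose_network(candidates, samples):
    """Return (best candidate name, all errors), minimizing the error."""
    errors = {name: validation_error(fn, samples)
              for name, fn in candidates.items()}
    return min(errors, key=errors.get), errors

# Toy historical data for one subsystem (assumed values).
samples = [(10, 20.1), (20, 40.2), (30, 59.8)]

candidates = {
    "feedforward": lambda x: 2.0 * x,        # stand-in for a trained MLP
    "rbf":         lambda x: 1.8 * x + 3.0,  # stand-in for an RBF network
}

best, errors = choose_network(candidates, samples)
```

In the paper's setting, each candidate would be a fully trained network of a given type, and the chosen one would drive the corresponding subsystem.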
The appearance of radiometers and dosimeters on the open market has made it possible to provide better radiation safety for citizens. The effects of radiation may not appear all at once; they can manifest themselves decades later, even in future generations, in the form of cancer, genetic mutations, etc. For this reason, we have developed a microcontroller-based radiation monitoring system. The system determines the radiation dose accumulated over a given period and issues alarm signals when the equivalent dose rate exceeds the allowable level. The high reliability of the system is ensured by a rapid response to emergency situations: an excess of the allowable equivalent dose rate, as well as battery charge control. Further, we have composed a microcontroller electronic circuit for the radiation monitoring system. Additionally, an operation algorithm, as well as software for the ATmega328P microcontroller of the Arduino Uno board, have been developed.
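The core monitoring logic, accumulating a dose from periodic dose-rate readings and flagging readings that exceed an allowable rate, can be sketched as below. The threshold and sample readings are assumptions for illustration, not values from the paper, and the real firmware runs on the ATmega328P rather than in Python.

```python
# Hypothetical sketch of the monitoring logic: accumulate an equivalent
# dose from periodic dose-rate readings and raise an alarm when the rate
# exceeds an allowable threshold. Threshold and data are assumed values.

RATE_LIMIT_USV_H = 0.30   # assumed allowable equivalent dose rate, uSv/h

def monitor(readings_usv_h, interval_h):
    """Return (accumulated dose in uSv, indices of alarm readings)."""
    dose = 0.0
    alarms = []
    for i, rate in enumerate(readings_usv_h):
        dose += rate * interval_h          # dose = rate x time per interval
        if rate > RATE_LIMIT_USV_H:
            alarms.append(i)               # emergency: flag this reading
    return dose, alarms

# Four 15-minute readings; the third exceeds the assumed limit.
dose, alarms = monitor([0.12, 0.15, 0.45, 0.14], interval_h=0.25)
```

The same accumulate-and-compare loop maps directly onto a timer-driven sampling routine in the microcontroller firmware.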
In the early stages of software development, the aim of estimation is to obtain a rough understanding of the timeline and resources required to implement a potential project. The current study is devoted to a method of preliminary estimation applicable at the beginning of the software development life cycle, when the level of uncertainty is high. The authors' concepts of the estimation life cycle, the estimable-items breakdown structure, and a system of working-time balance equations, in conjunction with an agile-fashioned sizing approach, are used. To minimize the experts' working time spent on preliminary estimation, the authors applied a decision support procedure based on integer programming and the analytic hierarchy process. The method's outcomes are not definitive enough to make commitments; instead, they are intended to be used for communication with project stakeholders or as inputs for the subsequent estimation stages. For practical use of the preliminary estimation method, a semi-structured business process is proposed.
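One building block mentioned above, the analytic hierarchy process, can be illustrated with the standard row-geometric-mean approximation of the priority vector. The pairwise judgments below are assumed example values, not the authors' data, and this is only the generic AHP step, not their full decision support procedure.

```python
# A minimal AHP sketch: derive priorities of estimable items from a
# pairwise-comparison matrix via row geometric means. The comparison
# values are illustrative assumptions.

from math import prod

def ahp_priorities(matrix):
    """Approximate the AHP priority vector via row geometric means."""
    n = len(matrix)
    gms = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Pairwise comparisons of three estimable items (assumed judgments);
# lower-triangle entries are reciprocals of the upper triangle.
M = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

weights = ahp_priorities(M)
```

The resulting weights could then feed an integer program that allocates the experts' limited estimation time to the highest-priority items.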
Nowadays, intensive streams of fuzzy input data need to be processed in real time in different fields of science and engineering. To solve this problem, a generalized model of the parallel-streaming neural element was developed in this paper. The proposed model minimizes hardware costs while providing scalar-product and activation-function calculations in real time. In particular, an algorithm and a structure for a parallel-streaming device (PSD) were developed to calculate a scalar product with the direct formation of partial products based on the analysis of a single bit-cut of the multipliers, which provides the shortest conveyor stage. It is based on a modified Booth's algorithm, which reduces equipment costs when processing operands with a high bit width and also yields the lowest equipment costs for operands with a low bit width. Furthermore, the research demonstrates that the main way to increase the speed of the developed algorithms and PSD structures for scalar-product calculation is the preliminary formation of partial products. The estimation of the model parameters shows a reduction of conveyor steps, an improvement in the locality of connections, and an increase in adaptability to the incoming data intensity. It is proposed to use the developed algorithms and structures as a basis for building devices for the parallel-streaming calculation of the scalar product in real time with high efficiency of equipment use. The main ways of harmonizing the arrival time of data and weights with the conveyor cycle of the PSD for scalar-product calculation are determined. The proposed methodology for building conveyor devices for the parallel-streaming calculation of the scalar product in real time for a given intensity of input data ensures the implementation of devices with the required speed and minimal hardware costs.
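The single-bit-cut idea can be shown in software: at each step, one bit position of all multipliers selects which weights contribute a shifted partial product. This is a behavioral sketch of the arithmetic for unsigned operands, not the paper's pipelined hardware design or its Booth-recoded variant.

```python
# A minimal sketch of forming a scalar product from single bit-cuts of
# the multipliers: at step k, every weight whose multiplier has bit k set
# contributes one partial product, shifted by k. Unsigned operands only;
# the hardware design described above also applies Booth recoding.

def scalar_product_bitcut(xs, ws, bit_width):
    """Compute sum(x * w) by scanning one bit-cut of xs per step."""
    acc = 0
    for k in range(bit_width):                 # one conveyor step per bit-cut
        cut = sum(w for x, w in zip(xs, ws) if (x >> k) & 1)
        acc += cut << k                        # partial product, weight 2**k
    return acc

xs, ws = [5, 3, 7], [2, 4, 1]
result = scalar_product_bitcut(xs, ws, bit_width=3)
```

Each loop iteration corresponds to one short conveyor stage: a mask of input bits, a reduction of selected weights, and a shift-accumulate.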
The precise categorization of brief texts holds significant importance in various applications within the ever-changing realm of artificial intelligence (AI) and natural language processing (NLP). Short texts are everywhere in the digital world, from social media updates to customer reviews and feedback. Nevertheless, the limited length and context of short texts pose unique challenges for accurate classification. This research article examines the influence of data-sorting methods on the quality of manual labeling in hierarchical classification, with a particular focus on short texts. The study is set against the backdrop of the increasing reliance on manual labeling in AI and NLP, highlighting its significance for the accuracy of hierarchical text classification. Methodologically, the study integrates AI, notably zero-shot learning, with human annotation processes to examine the efficacy of various data-sorting strategies. The results demonstrate how different sorting approaches affect the accuracy and consistency of manual labeling, a critical aspect of creating high-quality datasets for NLP applications. The findings reveal a significant improvement in labeling time efficiency: ordered manual labeling required 760 min per 1000 samples, compared to 800 min for traditional manual labeling, illustrating the practical benefits of optimized data-sorting strategies. Moreover, ordered manual labeling achieved the highest mean accuracy rates across all hierarchical levels, reaching up to 99% for segments, 95% for families, 92% for classes, and 90% for bricks, underscoring the efficiency of structured data sorting. The study offers valuable insights and practical guidelines for improving labeling quality in hierarchical classification tasks, thereby advancing the precision of text analysis in AI-driven research.
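One plausible form of the sorting strategy, grouping samples by a zero-shot model's predicted label so annotators review similar short texts together, can be sketched as follows. The scoring function here is a stub standing in for a real zero-shot classifier; the labels, texts, and ordering key are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of ordered labeling: sort samples by a zero-shot
# model's (predicted label, descending confidence) so that similar items
# are presented to annotators consecutively. fake_zero_shot is a stub.

def order_for_labeling(samples, zero_shot):
    """Sort samples by predicted label, then by descending confidence."""
    scored = [(text, *zero_shot(text)) for text in samples]
    return sorted(scored, key=lambda t: (t[1], -t[2]))

def fake_zero_shot(text):
    # Stub: 'predict' a category from a keyword, with a fixed confidence.
    label = "beverages" if "coffee" in text else "snacks"
    conf = 0.9 if "great" in text else 0.6
    return label, conf

ordered = order_for_labeling(
    ["great coffee", "salty chips", "cold coffee", "great pretzels"],
    fake_zero_shot,
)
```

With a real zero-shot model, the same two-level key keeps each hierarchical category contiguous and surfaces confident examples first.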
Estimation is an essential step of software development project planning that has a significant impact on project success: underestimation often leads to delivery problems or even causes project failure. An important aspect that classical estimation methods usually miss is the Agile nature of the development processes in the implementation phase. The estimation method proposed in this article targets software development projects implemented by Scrum teams with differentiated specializations. The method is based on the authors' system of working-time balance equations and on measuring project scope in time-based units, normalized development estimates. To reduce the effort spent on the estimation itself, an analysis of dependencies among project tasks is not mandatory. The outputs of the method are not recommended to be treated as commitments; instead, they are intended to inform project stakeholders about the forecasted duration of a potential project. The method is simple enough to allow even an inexpensive spreadsheet-based implementation.
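The balance idea, that each specialization's available working time over the forecast horizon must cover its share of the normalized scope, can be sketched as below. The team composition, capacities, and scope figures are illustrative assumptions, not the authors' calibrated balance equations.

```python
# A minimal sketch of a working-time balance, under assumed parameters:
# for each specialization, sprints * capacity must cover the normalized
# development estimate of the scope. All numbers are illustrative.

from math import ceil

def forecast_sprints(scope_hours, capacity_hours_per_sprint):
    """Smallest sprint count S with S * capacity >= scope for every
    specialization (the balance condition)."""
    return max(
        ceil(scope_hours[s] / capacity_hours_per_sprint[s])
        for s in scope_hours
    )

scope = {"backend": 300.0, "frontend": 180.0, "qa": 120.0}   # normalized estimates
capacity = {"backend": 60.0, "frontend": 48.0, "qa": 30.0}   # hours per sprint

sprints = forecast_sprints(scope, capacity)
```

The bottleneck specialization (here, the assumed backend) determines the forecast, which matches the method's intent of informing stakeholders rather than committing to a date.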
Control of a certain object can be implemented using different principles: a software-implemented algorithm, fuzzy logic, neural networks, and others. In recent years, the use of neural networks in control systems has become increasingly popular. However, their implementation in embedded systems requires taking into account limitations in performance, memory, and other resources. In this article, a neuro-controller for an embedded control system is proposed that enables the processing of input technological data. The structure of the neuro-controller is based on the modular principle, which ensures rapid improvement of the system during its development. The neuro-controller's functioning algorithm and a data-processing model based on artificial neural networks are developed. The neuro-controller hardware is built around an STM32 microcontroller, sensors, and actuators, which ensures a low implementation cost. The artificial neural network is implemented as a software module, which allows the neuro-controller's function to be changed quickly. As a usage example, we consider an STM32-based implementation of the control system for an intelligent mini-greenhouse.
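The software ANN module's data flow can be illustrated with a tiny feedforward pass mapping sensor readings to an actuator command. The network shape, weights, and greenhouse interpretation are assumptions for illustration; on the STM32 the same arithmetic would run in C, but the structure is identical.

```python
# Hypothetical sketch of the ANN software module: a small feedforward
# pass from sensor readings to a clamped actuator command. Weights and
# the 2-2-1 topology are illustrative assumptions.

def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: out_j = b_j + sum_i in_i * w_j[i]."""
    return [
        b + sum(i * w for i, w in zip(inputs, col))
        for col, b in zip(weights, biases)
    ]

def controller(sensors):
    # 2 sensors (e.g. temperature, humidity) -> 2 hidden -> 1 output.
    h = relu(layer(sensors, [[0.05, -0.02], [-0.01, 0.04]], [0.0, 0.1]))
    (out,) = layer(h, [[0.8, 0.6]], [0.0])
    return min(max(out, 0.0), 1.0)      # clamp to a valid duty cycle

duty = controller([30.0, 55.0])
```

Packaging the network as one module like this is what allows the controller's function to be swapped by replacing only the weights and topology.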
An approach to the implementation of a neural network for real-time cryptographic data protection with symmetric keys, oriented toward embedded systems, is presented. This approach is especially valuable for onboard communication systems in unmanned aerial vehicles (UAVs) because of its suitability for hardware implementation. In this study, we evaluate the possibility of building such a system in hardware on an FPGA. An onboard, implementation-oriented information technology for real-time neuro-like cryptographic data protection with symmetric keys (masking codes, neural network architecture, and matrix of weighting coefficients) has been developed. Due to the pre-calculation of the matrices of weighting coefficients and the tables of macro-partial products, the use of a tabular-algorithmic implementation of neuro-like elements, and the dynamic change of keys, it provides increased cryptographic strength and an efficient hardware–software implementation on an FPGA. The tabular-algorithmic method of calculating the scalar product has been improved: by reducing the weighting coefficients to the greatest common order, pre-computing the tables of macro-partial products, and using memory reads, fixed-point additions, and shifts instead of floating-point multiplications and additions, it reduces both the hardware cost of the implementation and the calculation time. Using a processor core supplemented with specialized hardware modules for calculating the scalar product, a real-time neural network cryptographic data protection system has been developed which, owing to the combination of universal and specialized approaches in software and hardware, ensures the effective implementation of neuro-like algorithms for cryptographic encryption and decryption of data in real time. The specialized hardware for neural network cryptographic data encryption was developed in VHDL in the Quartus II development environment, ver. 13.1, with the appropriate libraries, and implemented on an FPGA EP3C16F484C6 of the Cyclone III family; it requires 3053 logic elements and 745 registers. The execution time of a purely software realization of the NN cryptographic data encryption procedure on a NanoPi Duo microcomputer based on the Allwinner H2+ SoC (Cortex-A7) was about 20 ms. The hardware–software implementation of the encryption, taking into account the pre-calculations and settings, requires about 1 ms, including hardware encryption on the FPGA of four 2-bit inputs, which is performed in 160 ns.
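The tabular-algorithmic idea, precomputing "macro-partial products" (sums of weights for every combination of input bits) so that the scalar product needs only table reads, additions, and shifts, can be shown behaviorally. This is a generic distributed-arithmetic sketch, not the paper's VHDL design; the operand values are assumed.

```python
# A behavioral sketch of the tabular-algorithmic scalar product: the table
# stores, for every mask of input bits, the sum of the selected weights.
# The product then uses only table lookups, shifts, and additions,
# mirroring the memory-read / fixed-point-add / shift scheme described
# above. All values are illustrative.

def build_table(weights):
    """table[m] = sum of weights whose index bit is set in mask m."""
    n = len(weights)
    return [sum(w for i, w in enumerate(weights) if (m >> i) & 1)
            for m in range(1 << n)]

def scalar_product_da(xs, table, bit_width):
    acc = 0
    for k in range(bit_width):
        # One bit-cut across all inputs selects a precomputed table entry.
        mask = sum(((x >> k) & 1) << i for i, x in enumerate(xs))
        acc += table[mask] << k
    return acc

ws = [3, -2, 5]
table = build_table(ws)                       # 2**3 = 8 macro-partial products
result = scalar_product_da([6, 1, 4], table, bit_width=3)
```

Since the table depends only on the weights, it can be rebuilt when keys change dynamically, while the per-sample path stays multiplication-free.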
The object of study of this research paper is the process of changing the properties of three-dimensional surfaces of a user avatar in real time. In the course of this work, the research addressed the limitations of existing solutions for synthesizing three-dimensional user avatars, particularly in terms of realism and personalization on mobile devices. Furthermore, the study tackled the challenge of efficiently adjusting color attributes without compromising the underlying texture information, ultimately enhancing the user experience across applications such as gaming, virtual reality, and social media platforms. A method consisting of three key components is proposed: pre-designed 3D models, multi-layer texturing, and a software and hardware implementation. The multi-layer texturing approach combines different texture maps, such as diffuse and occlusion maps, which contributes to the smooth integration of texture attributes and the overall realism of the 3D avatars. The real-time change of surface properties is achieved by mixing the diffuse map with the other texture maps using the Metal hardware accelerator, allowing users to adjust the color attributes of their 3D avatars efficiently while preserving the underlying texture information. The paper presents a software algorithm that uses the SceneKit game engine and the Metal framework to render 3D avatars on iOS devices. The result of the developed method and tool is a mobile application for the iOS platform that allows users to modify a digital 3D avatar by changing the model's colors. The paper presents the results of testing the proposed methods, tools, and application and compares them with existing solutions in the industry. The developed method can be applied in areas such as gaming, virtual reality, video conferencing, and social media platforms, offering greater personalization and a more immersive user experience.
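The texture-mixing step can be illustrated with a per-pixel multiply blend: a user-chosen tint recolors the diffuse map, and an ambient-occlusion map is multiplied in, so shading detail survives the color change. In the app this runs on the GPU via Metal; the plain-Python version below, with assumed toy texel values, only shows the arithmetic.

```python
# Hypothetical sketch of the multi-layer blend: tint the diffuse map and
# multiply by the occlusion map, preserving shading detail under a color
# change. Pixel values are illustrative; the real path is a Metal shader.

def blend_pixel(diffuse, tint, occlusion):
    """diffuse, tint: (r, g, b) in 0..1; occlusion: scalar in 0..1."""
    return tuple(d * t * occlusion for d, t in zip(diffuse, tint))

def recolor(diffuse_map, occlusion_map, tint):
    return [
        [blend_pixel(d, tint, o) for d, o in zip(drow, orow)]
        for drow, orow in zip(diffuse_map, occlusion_map)
    ]

# 1x2 toy maps: a fully lit texel and a shadowed one (assumed values).
diffuse = [[(1.0, 1.0, 1.0), (0.8, 0.8, 0.8)]]
occlusion = [[1.0, 0.5]]
out = recolor(diffuse, occlusion, tint=(1.0, 0.2, 0.2))
```

Because the occlusion factor scales the tinted color rather than replacing it, changing the tint never destroys the baked shading information.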