This study aims to design an information system that supports research on a target costing system with a data analysis program: to design the data analysis program used in the research, analyse the effectiveness of the program, and evaluate its weaknesses and strengths going forward. The authors use a questionnaire as the instrument for the relational data model design of the Target Costing System, so that the research objectives can be achieved more accurately. Designing the Target Costing questionnaire against the built system shows that the questionnaire is useful for providing an information system that generates output and handles input (entry, edit, inquiry, add record) based on queries matching the researcher's needs related to target costing. Microsoft Access serves as the tool to design and create tables, forms, and reports. To display accurate results, the Target Costing questionnaire design can present data as graphs, reports, and queries, or filter data with Filter by Form to ease searching for data according to the researcher's needs. DOI: https://doi.org/10.26905/jtmi.v3i2.1323
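The create-table, entry, edit, inquiry, and filter operations the abstract attributes to Access can be sketched in plain SQL. The sketch below is illustrative only: the table and column names are invented (the paper does not publish its schema), SQLite stands in for Access, and the target-cost formula shown is the standard one (market price less desired margin), not necessarily the paper's exact calculation.

```python
import sqlite3

# Invented schema standing in for the paper's target costing tables.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE target_cost (
    product TEXT, market_price REAL, desired_margin REAL)""")

# Entry / add record.
conn.executemany("INSERT INTO target_cost VALUES (?, ?, ?)",
                 [("Widget", 10.0, 0.2), ("Gadget", 25.0, 0.3)])

# Edit.
conn.execute("UPDATE target_cost SET desired_margin = 0.25 "
             "WHERE product = 'Widget'")

# Inquiry with a derived column: target cost = market price x (1 - margin).
for row in conn.execute("""SELECT product,
                                  market_price * (1 - desired_margin)
                                      AS target_cost
                           FROM target_cost
                           WHERE market_price > 5  -- filter, like Filter by Form
                           ORDER BY target_cost"""):
    print(row)
```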
DATA MINING IN RELATIONAL SYSTEMS Valentin Filatov; Valerii Semenets; Oleg Zolotukhin
Sučasnij stan naukovih doslìdženʹ ta tehnologìj v promislovostì (Online),
09/2020
3 (13)
Journal Article
Peer-reviewed
Open access
The subject of the research is methods of relational database mining. The purpose of the research is to develop scientifically grounded models for supporting intelligent technologies for integrating and managing information resources of distributed computing systems; to explore the features of the operational specification of the relational data model; and to develop a method for evaluating a relational data model and a procedure for constructing functional associative rules when solving problems of mining relational databases. In accordance with the stated research goal, the article considers the following tasks: analysis of existing methods and technologies for data mining; research into methods for representing intelligent models by means of relational systems; development of a technology for evaluating the relational data model for building functional association rules in the tasks of mining relational databases; and development of design tools and maintenance for applied data mining tasks. Results: An analysis of existing methods and technologies for data mining is carried out. The features of the structural specification of a relational database and the formation of association rules for building a decision support system are investigated. An information technology and a methodology for designing information and analytical systems based on the relational data model are developed for solving practical mining problems, together with practical recommendations for using a relational data model to build functional association rules in relational database mining. Conclusion: The main source of knowledge for database operation can be a relational database, so the study of data properties is an urgent task in the construction of systems of association rules. On the one hand, associative rules are close to logical models, which makes it possible to organize efficient inference procedures on them; on the other hand, they reflect knowledge more clearly than classical models. They do not have the strict limitations typical of logical calculus, which makes it possible to change the interpretation of production elements. The search for association rules is far from trivial: one problem is the algorithmic complexity of finding frequently occurring itemsets, since as the number of items grows, the number of potential itemsets grows exponentially.
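The abstract's closing point, that the number of candidate itemsets grows exponentially with the number of items, is the reason Apriori-style algorithms prune candidates level by level. A minimal sketch of that idea (illustrative only; the paper publishes no code, the transaction data here is invented, and full subset-based pruning is omitted for brevity):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style level-wise search: an itemset can only be frequent
    if its subsets are frequent, so each level is built only from the
    frequent sets of the level below."""
    n = len(transactions)
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}   # level 1: single items
    result, k = {}, 1
    while current:
        # Count support of each candidate in one pass over the data.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        frequent = {c: cnt / n for c, cnt in counts.items()
                    if cnt / n >= min_support}
        result.update(frequent)
        # Join frequent k-itemsets to form (k+1)-candidates.
        keys = list(frequent)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == k + 1}
        k += 1
    return result

rows = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
for itemset, support in sorted(frequent_itemsets(rows, 0.6).items(),
                               key=lambda kv: -kv[1]):
    print(set(itemset), round(support, 2))
```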
In the contemporary world, a large amount of heterogeneous data is accumulated; these data have different natures and require specific approaches to their processing and storage. Even within one information system, it is often necessary to process data represented in different data models from the same knowledge domain. One way to solve this problem is multimodel databases, which simultaneously support several data models. Such database management systems generally imply a division into “primary” and “secondary” data models, and they require explicit mapping of data schemas. The relational data model appeared a long time ago; it is well studied and widely used. On the other hand, graph data models, which suit social networks, recommender services, transport networks, and similar domains, are becoming increasingly popular. In this paper, we propose algorithms for mapping relational and graph databases whose composition is an identity mapping. These algorithms form a basis for creating multimodel graph-relational database management systems.
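A minimal sketch of the round-trip property the abstract names, namely that composing the two mappings yields the identity. The function and label names are invented, and the paper's actual algorithms also translate foreign keys into edges, which this sketch omits:

```python
def relational_to_graph(table_name, rows, key):
    """Map each row of a table to a labelled property-graph node.
    Foreign-key columns would additionally become edges."""
    return [{"label": table_name, "id": row[key], "properties": dict(row)}
            for row in rows]

def graph_to_relational(nodes):
    """Inverse mapping: recover the original rows from node properties."""
    return [node["properties"] for node in nodes]

people = [{"person_id": 1, "name": "Ada"},
          {"person_id": 2, "name": "Grace"}]
nodes = relational_to_graph("Person", people, key="person_id")
assert graph_to_relational(nodes) == people  # the round trip is the identity
```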
Data integration is one of the core responsibilities of EDM (enterprise data management) and interoperability. It is essential for almost every digitalization project, e.g., during the migration from a legacy ERP (enterprise resource planning) software to a new system. One challenge is the incompatibility of data models, i.e., different software systems use specific or proprietary terminology, data structures, data formats, and semantics. Data need to be interchanged between software systems, and often complex data conversions or transformations are necessary. This paper presents an approach that allows software engineers or data experts to use models and patterns in order to specify data integration: it is based on data models such as ER (entity-relationship) diagrams or UML (unified modeling language) class models that are well-accepted and widely used in practice. Predefined data integration patterns are combined (applied) on the model level, leading to formal, precise, and concise definitions of data transformations and conversions. Data integration definitions can then be executed (via code generation) so that a manual implementation is not necessary. The advantages are that existing data models can be reused, standardized data integration patterns lead to fast results, and data integration specifications are executable and can be easily maintained and extended. An example transformation of elements of a relational data model to object-oriented data structures shows the approach in practice. Its focus is on data mappings and relationships.
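As a hand-written stand-in for the generated transformations the paper describes (the class, table, and field names below are invented, not the paper's), this sketch shows the kind of relational-to-object mapping the example transformation targets, folding a 1:n foreign-key relationship into object containment:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int
    items: list = field(default_factory=list)  # 1:n relationship as containment

# Flat relational rows: an ORDER table and an ORDER_ITEM table with a
# foreign key back to the order.
orders = [{"order_id": 1}, {"order_id": 2}]
items = [{"order_id": 1, "sku": "A"}, {"order_id": 1, "sku": "B"},
         {"order_id": 2, "sku": "C"}]

def to_objects(order_rows, item_rows):
    """Map rows to objects: the foreign-key relationship becomes a
    nested collection on the owning object."""
    by_id = {r["order_id"]: Order(order_id=r["order_id"]) for r in order_rows}
    for item in item_rows:
        by_id[item["order_id"]].items.append(item["sku"])
    return list(by_id.values())

for order in to_objects(orders, items):
    print(order)
```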
This article presents the results of designing the structure of the data models necessary for the proper functioning of a database supporting the editorial, proofreading, printing, and publishing ...activities of an educational organization. The goal of the study is to create a coherent, objective, holistic, and non-redundant data set appropriate to the subject area to support the process of digital transformation of the publishing and publication sphere within an educational organization. The resulting data models reflect the logical structure of databases independent of the choice of a particular database management system. As part of the design, JavaScript Object Notation models and relational data models are developed to create a relational database. The results allow identifying and estimating the resources needed to implement each process, determining all participants affecting the implementation of the relevant processes, and defining the access to resources needed by certain participants in the process.
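To make the pairing of model kinds concrete, the sketch below places a JSON document model for one editorial workflow item next to an equivalent relational structure. The entity and field names are invented (the article does not publish its schemas), and SQLite is used only as a convenient stand-in for any DBMS, matching the article's point that the logical structure is DBMS-independent:

```python
import json, sqlite3

# Invented JSON model of a manuscript moving through editorial stages.
manuscript_json = {
    "manuscript_id": 7,
    "title": "Course Reader",
    "stages": [
        {"stage": "proofreading", "assignee": "editor-1"},
        {"stage": "layout", "assignee": "designer-2"},
    ],
}

# Equivalent relational structure: the nested array becomes a child table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE manuscript (manuscript_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE stage (
        manuscript_id INTEGER REFERENCES manuscript(manuscript_id),
        stage TEXT, assignee TEXT);
""")
conn.execute("INSERT INTO manuscript VALUES (?, ?)",
             (manuscript_json["manuscript_id"], manuscript_json["title"]))
conn.executemany("INSERT INTO stage VALUES (?, ?, ?)",
                 [(manuscript_json["manuscript_id"], s["stage"], s["assignee"])
                  for s in manuscript_json["stages"]])

print(json.dumps(manuscript_json, indent=2))
print(conn.execute("SELECT * FROM stage").fetchall())
```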
ThespisTRX: Causally-Consistent Read Transactions Camilleri, Carl; Vella, Joseph G; Nezval, Vitezslav
International journal of information technology and web engineering,
01/2020, Volume: 15, Issue: 1
Journal Article
Peer-reviewed
Data consistency defines how usable a data set is. Causal consistency is the strongest type of consistency that can be achieved when data is stored in multiple locations and fault tolerance is desired. Thespis is a middleware that innovatively leverages the Actor model to implement causal consistency over a DBMS, whilst abstracting complexities for application developers behind a REST interface. Following the evaluation of correctness, performance, and scalability of Thespis, it is illustrated how a business application can be guaranteed causal consistency yet still encounter time-of-check-to-time-of-use (TOCTOU) race conditions. The design and implementation of ThespisTRX is given, which builds upon and extends the Thespis middleware to offer read-only transaction capabilities, allowing clients to read a causally-consistent version of multiple data entities. A correctness analysis illustrates how ThespisTRX avoids TOCTOU race conditions, and empirical performance tests show that this can be achieved with minimal overheads.
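The TOCTOU hazard arises when two individually consistent single-key reads are interleaved with a concurrent write. A toy sketch of the difference between independent reads and one snapshot read (invented API; ThespisTRX itself is accessed over REST, and the lock-based snapshot here merely stands in for its multiversion read-only transaction):

```python
import threading, time

# Toy store: each single-key read is consistent on its own.
store = {"balance": 100, "limit": 100}
lock = threading.Lock()

def snapshot_read(keys):
    """Read-only transaction: every key comes from one consistent version."""
    with lock:
        return {k: store[k] for k in keys}

def concurrent_update():
    time.sleep(0.01)              # lands between the two independent reads
    with lock:
        store["limit"] = 50

t = threading.Thread(target=concurrent_update)
t.start()
balance = store["balance"]        # time of check ...
time.sleep(0.02)
limit = store["limit"]            # ... time of use: a later version leaks in
t.join()
print("independent reads:", balance, limit)                       # mixed versions
print("snapshot read:   ", snapshot_read(["balance", "limit"]))   # one version
```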
Background: Conceptual models are an essential phase in software design, but they can cause confusion and reduced performance for students in Database Design courses.
Objective: A novel Relational Data Model Validation Tool (MVTool) was developed and tested to determine (1) whether students who use MVTool perform better than those who do not, and (2) whether design skills improve after using MVTool.
Method: After a pre-test of database design skills, 68 students were divided into matched-pair control and experimental groups. All completed a database design task, with the experimental group having access to MVTool and the control group having no access to the tool.
Findings: Students showed consistent, notable improvements in specific design skills after the introduction of the tool.
Implications: Validation tools such as MVTool may help students to understand modeling languages and conventions used in database design, thereby improving their skill development and course outcomes.
The cost associated with making decisions based on poor-quality data is quite high. Consequently, the management of data quality and the quality of associated data management processes has become critical for organizations. An important first step in managing data quality is the ability to measure the quality of information products (derived data) based on the quality of the source data and associated processes used to produce the information outputs. We present a methodology to determine two data quality characteristics, accuracy and completeness, that are of critical importance to decision makers. We examine how the quality metrics of source data affect the quality of information outputs produced using the relational algebra operations selection, projection, and Cartesian product. Our methodology is general and can be used to determine how quality characteristics associated with diverse data sources affect the quality of the derived data.
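As a concrete illustration of quality propagation through these operators, the sketch below uses a common simplifying assumption, uniform and independent errors across sources; the paper's own model may differ in its details:

```python
def product_quality(q_r, q_s):
    """Cartesian product R x S: a result tuple is accurate (complete) only
    if both contributing tuples are, so under independence the metrics
    multiply."""
    return {"accuracy": q_r["accuracy"] * q_s["accuracy"],
            "completeness": q_r["completeness"] * q_s["completeness"]}

def projection_quality(q_r):
    """Projection: with errors spread uniformly over columns and no
    duplicate elimination, per-tuple metrics carry over unchanged."""
    return dict(q_r)

def selection_quality(q_r):
    """Selection on a cleanly evaluable predicate: a uniformly erroneous
    subset inherits the source metrics."""
    return dict(q_r)

orders = {"accuracy": 0.95, "completeness": 0.90}
customers = {"accuracy": 0.98, "completeness": 0.99}
print(product_quality(orders, customers))
# {'accuracy': 0.931, 'completeness': 0.891}
```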