Price gouging, or price hiking, is a worldwide issue related to inflation. Because of rising prices, people in various countries cannot afford nutritious food or proper treatment. Shops, restaurants, and transportation service providers sometimes charge buyers more than the prescribed price. In addition, unauthorized VAT or tax is collected on products that the government has exempted. Another driver of price hikes is bribery, which occurs during the transport and delivery of goods. This article introduces a blockchain-based Internet of Things model to monitor product price hikes and corruption from the Industry 4.0 and Blockchain 5.0 points of view. Industries produce and package different products, and wholesalers and retailers purchase those products from industrial companies. The primary goal of this article is to propose a blockchain mechanism for monitoring price hikes and corruption, in which the government can monitor buying and selling between buyers and industrial companies. We establish a blockchain-integrated remote-database model in which the blockchain connects to a relational database management system through a remote database access protocol and a cloud server. The article also presents a brief evolution of blockchain and industry generations and, finally, proposes a next-generation blockchain model in which an intelligent government connected to Industry 4.0 monitors price hikes and corruption.
This paper concerns the paraconsistent logic LPQ⊃,F and an application of it in the area of relational database theory. The notions of a relational database, a query applicable to a relational database, and a consistent answer to a query with respect to a possibly inconsistent relational database are considered from the perspective of this logic. This perspective enables, among other things, the definition of a consistent answer to a query with respect to a possibly inconsistent database without resort to database repairs. In an earlier paper, LPQ⊃,F is presented with a sequent-style natural deduction proof system. In this paper, a sequent calculus proof system is presented instead, because such proof systems are generally considered more suitable as the basis of proof search procedures than natural deduction proof systems, and proof search procedures can serve as the core of algorithms for computing consistent answers to queries.
In relational database management systems (RDBMSs), efficient join methods for text retrieval using an inverted index have been developed and implemented. However, the existing intersection of inverted-index posting lists increases keyword search time for large texts because of unnecessary comparisons. Relation-based search produces results by computing the posting-list intersection. To reduce query search time, a multi-way skip-merge join algorithm is proposed in this study. The proposed algorithm improves execution speed by using a sorted inverted-index posting list to minimize unnecessary comparison operations in the posting-list intersection. The skip-merge join method, which minimizes unnecessary comparison operations using an aggregate function, is integrated with a multi-way join as a replacement for the existing two-way join method. The combined skip-merge and multi-way join algorithm maintains good performance as the number of search keywords and the number of documents increase. The performance improvement in keyword search is verified by implementing the multi-way skip-merge join algorithm in PostgreSQL, an RDBMS.
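The skip idea behind this abstract can be sketched independently of PostgreSQL. The following is a minimal illustrative multi-way intersection of sorted posting lists in which lagging lists jump ahead by binary search instead of comparing element by element; the function name and structure are our own, not the paper's implementation.

```python
from bisect import bisect_left

def multi_way_skip_intersect(postings):
    """Intersect several sorted posting lists of document IDs.

    Lagging lists skip ahead with binary search instead of advancing
    one element at a time, which cuts unnecessary comparisons when
    the lists are long and sparse. Illustrative sketch only.
    """
    if not postings or any(len(p) == 0 for p in postings):
        return []
    postings = sorted(postings, key=len)   # drive from the shortest list
    pos = [0] * len(postings)
    result = []
    while pos[0] < len(postings[0]):
        candidate = postings[0][pos[0]]
        matched = True
        for i in range(1, len(postings)):
            # Skip: jump to the first ID >= candidate in list i.
            pos[i] = bisect_left(postings[i], candidate, pos[i])
            if pos[i] == len(postings[i]):
                return result              # one list exhausted: done
            if postings[i][pos[i]] != candidate:
                matched = False
                # Let the driver list skip past the blocking ID.
                pos[0] = bisect_left(postings[0],
                                     postings[i][pos[i]], pos[0])
                break
        if matched:
            result.append(candidate)
            pos[0] += 1
    return result
```

The same skipping principle underlies skip pointers in classic inverted-index retrieval; the paper's contribution is integrating it with a multi-way join inside an RDBMS.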
MVAR: A Mouse Variation Registry
El Kassaby, Bahá; Castellanos, Francisco; Gerring, Matthew ...
Journal of molecular biology,
2024-Mar-06
Journal Article
Peer reviewed
Open access
•MVAR aggregates and annotates genome variation from large-scale sequencing of different mouse strains and expertly curated variants for phenotypic alleles.
•Variant annotation in MVAR includes variant type, molecular consequence, impact, and region.
•Data in MVAR are accessible in both human- and machine-readable formats.
•MVAR serves as both a stand-alone database of mouse genome variation and a variant annotation service.
•MVAR is a platform for facilitating genotype-phenotype associations in the laboratory mouse.
•The MVAR resource was implemented using a micro-services architecture, providing both interoperability and ease of software maintenance.
The Mouse Variation Registry (MVAR) resource is a scalable registry of mouse single-nucleotide variants and small indels, together with variant annotations. The resource accepts data in standard Variant Call Format (VCF) and assesses the uniqueness of the submitted variants via a canonicalization process. Novel variants are assigned a unique, persistent MVAR identifier; variants that are equivalent to an existing variant in the resource are associated with the existing identifier. Annotations for variant type, molecular consequence, impact, and genomic region in the context of specific transcripts and protein sequences are generated using Ensembl’s Variant Effect Predictor (VEP) and Jannovar. Access to the data and annotations in MVAR is supported via an Application Programming Interface (API) and a web application. Researchers can search the resource by gene symbol, genomic region, variant (expressed in Human Genome Variation Society syntax), refSNP identifiers, or MVAR identifiers. Tabular search results can be filtered by variant annotations (variant type, molecular consequence, impact, variant region) and viewed according to variant distribution across mouse strains. The registry currently comprises more than 99 million canonical single-nucleotide variants for 581 strains of mice. MVAR is accessible from https://mvar.jax.org.
Global trade is plagued by slow and inefficient manual processes associated with physical documents. Firms are constantly looking for new ways to improve transparency and increase the resilience of their supply chains. This can be addressed by digitalising supply chains and automating document- and information-sharing processes. Blockchain is touted as a solution to these issues because of its unique combination of features, such as immutability, decentralisation, and transparency. However, a lack of business cases quantifying the costs and benefits creates uncertainty about these claims. This paper explores how the costs and benefits of a blockchain-based solution for digitalising and automating documentation flows in cross-border supply chains compare with those of a conventional centralised relational database solution. The research uses primary data collected through semi-structured interviews with industry experts, as well as secondary data from the literature. Two models based on existing services were developed, and their costs and benefits were compared and then analysed using the Architecture Trade-off Analysis Method (ATAM) and the Analytic Network Process (ANP). The analysis shows that a consortium blockchain solution like TradeLens is the favourable option for digitalising and automating information flows in cross-border supply chains.
The data produced by various services should be stored and managed in an appropriate format so that valuable knowledge can be gained conveniently. This need has led to the emergence of various data models, including the relational, semi-structured, and graph models, among others. Because mature relational databases built on the relational data model remain predominant in today’s market, interest has grown in storing and processing semi-structured data and graph data in relational databases, so that the mature and powerful capabilities of relational databases can be applied to these varied data. In this survey, we review existing methods for mapping semi-structured data and graph data into relational tables, analyze their major features, and give a detailed classification of those methods. We also summarize the merits and demerits of each method, introduce open research challenges, and present future research directions. With this comprehensive investigation of existing methods and open problems, we hope this survey can motivate new mapping approaches by drawing lessons from each model’s mapping strategies, as well as a new research topic: mapping multi-model data into relational tables.
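The simplest of the mapping strategies this survey covers, storing a graph as node and edge tables so that traversal becomes a relational join, can be sketched as follows. This is an illustrative example with invented table and column names, not a method taken from the survey.

```python
import sqlite3

# Minimal sketch: map a graph into two relational tables, one for
# nodes and one for edges; a one-hop neighbourhood query is then a
# plain join. All names here are invented for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node(id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE edge(src INTEGER REFERENCES node(id),
                      dst INTEGER REFERENCES node(id));
""")
conn.executemany("INSERT INTO node VALUES (?, ?)",
                 [(1, "alice"), (2, "bob"), (3, "carol")])
conn.executemany("INSERT INTO edge VALUES (?, ?)",
                 [(1, 2), (2, 3), (1, 3)])

# Neighbours of node 1: join the edge relation back to node.
rows = conn.execute("""
    SELECT n.label FROM edge e JOIN node n ON n.id = e.dst
    WHERE e.src = 1 ORDER BY n.label
""").fetchall()
neighbours = [r[0] for r in rows]
```

More elaborate mappings in the literature trade this simplicity for fewer joins on deep traversals, which is one axis along which the survey classifies methods.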
The objective of this research is to compare the relational and non-relational (NoSQL) database systems approaches in order to store, recover, query, and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system.
One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six different complexity growing queries, which have been performed on them. Similar appropriate results available in the literature have also been considered.
Relational and non-relational NoSQL database systems both show almost linear query-execution complexity. However, their linear slopes differ greatly, that of the relational system being much steeper than those of the two NoSQL systems. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency.
Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when the database size is extremely large (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
Full text
Available for:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
Operational NoSQL systems are relatively new in the data-management ecosystem, and there is much confusion about their capabilities and how they differ from traditional relational database systems. This summary of characteristics clearly distinguishes the two system classes and provides a glimpse into directions for future work.
Document Spanners
Fagin, Ronald; Kimelfeld, Benny; Reiss, Frederick ...
Journal of the ACM, 05/2015, Volume 62, Issue 2
Journal Article
Peer reviewed
An intrinsic part of information extraction is the creation and manipulation of relations extracted from text. In this article, we develop a foundational framework whose central construct is what we call a document spanner (or just spanner for short). A spanner maps an input string into a relation over the spans (intervals specified by bounding indices) of the string. The focus of this article is on the representation of spanners. Conceptually, there are two kinds of such representations. Spanners defined in a primitive representation extract relations directly from the input string; those defined in an algebra apply algebraic operations to the primitively represented spanners. This framework is driven by SystemT, an IBM commercial product for text analysis, in which the primitive representation is that of regular expressions with capture variables. We define additional types of primitive spanner representations by means of two kinds of automata that assign spans to variables. We prove that the first kind has the same expressive power as regular expressions with capture variables; the second kind expresses precisely the algebra of the regular spanners: the closure of the first kind under standard relational operators. The core spanners extend the regular ones by string-equality selection (an extension used in SystemT). We give some fundamental results on the expressiveness of regular and core spanners. As an example, we prove that regular spanners are closed under difference (and complement), but core spanners are not. Finally, we establish connections with related notions in the literature.
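As a toy illustration of the primitive representation this abstract describes, a regular expression with capture variables maps an input string to a relation over spans. The function and variable names (`spanner`, `who`, `what`) are ours, and this sketch only mimics the idea, not SystemT or the paper's formalism.

```python
import re

def spanner(pattern, text):
    """Toy document spanner: a regex with named capture variables maps
    an input string to a relation whose tuples assign a (start, end)
    span to each variable. Illustrative sketch only."""
    rel = []
    for m in re.finditer(pattern, text):
        # One tuple per match; each variable is bound to its span,
        # not to the substring itself.
        rel.append({v: m.span(v) for v in m.groupdict()})
    return rel

text = "Ada wrote code. Alan wrote proofs."
rows = spanner(r"(?P<who>[A-Z][a-z]+) wrote (?P<what>[a-z]+)", text)
# rows is a relation over spans of `text`, e.g. rows[0]["who"]
# identifies the interval of "Ada" within the input string.
```

Keeping spans rather than substrings is what lets the algebraic operators of the framework (union, projection, join, and, for core spanners, string-equality selection) compose cleanly.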
NADEEF
Dallachiesa, Michele; Ebaid, Amr; Eldawy, Ahmed ...
Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data,
06/2013
Conference Proceeding
Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and repair of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform, similar to general-purpose DBMSs, that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized, and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it, by writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well-known CFDs (FDs), MDs, and ETL rules. Treating user-implemented interfaces as black boxes, the core provides algorithms to detect errors and to clean data. The core is designed to allow cleaning algorithms to cope with multiple rules holistically, i.e., detecting and repairing data errors without differentiating between the various types of rules. We showcase two implementations of core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.
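The functional dependencies (FDs) mentioned in the abstract are the simplest rule type such a platform must detect. The following is a generic FD-violation detector for illustration only; it is not NADEEF's actual rule interface, and all names and sample data are invented.

```python
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Find violations of the functional dependency lhs -> rhs:
    groups of tuples that agree on the lhs attributes but disagree
    on the rhs attributes. Illustrative sketch, not NADEEF's API."""
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[a] for a in lhs)
        groups[key].add(tuple(row[a] for a in rhs))
    # A key with more than one distinct rhs value marks a violation.
    return {key for key, vals in groups.items() if len(vals) > 1}

# Hypothetical dirty data: zip -> city should hold but does not.
employees = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},             # conflicting city
    {"zip": "94105", "city": "San Francisco"},
]
bad_keys = fd_violations(employees, ["zip"], ["city"])
```

Detection is the easy half; the point of a platform like NADEEF is deciding holistically how to repair such violations when many heterogeneous rules interact.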