Hazards that can affect the safety of people on construction sites include falls from heights (FFH), trench and scaffold collapse, electric shock and arc flash/arc blast, and failure to use proper personal protective equipment. Such hazards are significant contributors to accidents and fatalities. Computer vision has been used to automatically detect safety hazards to assist with the mitigation of accidents and fatalities. However, as safety regulations are subject to change and become more stringent, prevailing computer vision approaches become obsolete because they are unable to accommodate the adjustments made to practice. This paper integrates computer vision algorithms with ontology models to develop a knowledge graph that can automatically and accurately recognise hazards while adhering to safety regulations, even when those regulations change. Our knowledge graph consists of: (1) an ontological model for hazards; (2) knowledge extraction; and (3) knowledge inference for hazard identification. We focus on the detection of hazards associated with FFH as an example to illustrate our proposed approach. We also demonstrate that our approach can successfully detect FFH hazards in varying contexts from images.
• A knowledge graph is developed to automatically identify hazards.
• Computer vision algorithms and ontology are used to develop the knowledge graph.
• Examples are used to illustrate the feasibility of the proposed approach.
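As a rough illustration of the kind of rule-based inference such a knowledge graph enables, the following Python sketch flags a fall-from-height hazard from hypothetical detector output; the class names, the 2 m threshold, and the rule itself are illustrative assumptions, not the paper's actual ontology:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # object class from a vision detector (assumed)
    height_m: float   # estimated working height
    guarded: bool     # whether a guardrail was detected nearby

def ffh_hazard(d: Detection) -> bool:
    """Flag a fall-from-height hazard: a worker at or above a
    threshold height with no guardrail (threshold is an assumption)."""
    return d.label == "worker" and d.height_m >= 2.0 and not d.guarded

detections = [
    Detection("worker", 4.5, guarded=False),  # on an open edge
    Detection("worker", 4.5, guarded=True),   # behind a guardrail
    Detection("ladder", 0.0, guarded=False),
]
hazards = [d for d in detections if ffh_hazard(d)]
```

Keeping the rule separate from the detector is what lets the hazard definition change with the regulations, without retraining the vision model.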
Graph databases have gained widespread adoption in various industries and have been utilized in a range of applications, including financial risk assessment, commodity recommendation, and data lineage tracking. While the principles and design of these databases have been the subject of some investigation, there remains a lack of comprehensive examination of aspects such as storage layout, query language, and deployment. The present study focuses on the design and implementation of the graph storage layout, with a particular emphasis on tree-structured key-value stores. We also examine different design choices in the graph storage layer and present our findings through the development of TuGraph, a highly efficient single-machine graph database that significantly outperforms well-known graph database management systems (GDBMSs). Additionally, TuGraph demonstrates superior performance on the Linked Data Benchmark Council (LDBC) Social Network Benchmark (SNB) interactive benchmark.
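The tree-structured key-value layout mentioned above can be illustrated with a small Python sketch in which adjacency is stored as sorted (source, label, destination) keys, so a node's neighbours are served by one contiguous range scan; the key layout is an assumption for illustration, not TuGraph's actual format:

```python
import bisect

store = []  # ordered "key-value store": sorted (src, label, dst) keys

def add_edge(src, label, dst):
    key = (src, label, dst)
    i = bisect.bisect_left(store, key)
    if i == len(store) or store[i] != key:
        store.insert(i, key)  # keeps the key order of the tree store

def neighbors(src, label=None):
    """Prefix scan: all edges from src (optionally one label) sit in
    a single contiguous key range of the ordered store."""
    lo = (src, label if label else "", "")
    hi = (src, label if label else "\uffff", "\uffff")
    i = bisect.bisect_left(store, lo)
    j = bisect.bisect_right(store, hi)
    return [store[k][2] for k in range(i, j)]

add_edge("a", "knows", "b")
add_edge("a", "knows", "c")
add_edge("a", "likes", "d")
add_edge("b", "knows", "a")
```

In a real GDBMS the ordered store would be a B-tree or LSM-tree rather than a Python list; the point of the layout is that per-node traversal reduces to a prefix scan.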
Object Graph Programming Thimmaiah, Aditya; Lampropoulos, Leonidas; Rossbach, Christopher ...
Proceedings of the 46th IEEE/ACM International Conference on Software Engineering,
02/2024
Conference Proceeding
Open access
We introduce Object Graph Programming (OGO), which enables reading and modifying an object graph (i.e., the entire state of the object heap) via declarative queries. OGO models the objects and their relations in the heap as an object graph, thereby treating the heap as a graph database: each node in the graph is an object (e.g., an instance of a class or an instance of a metadata class) and each edge is a relation between objects (e.g., a field of one object references another object). We leverage Cypher, the most popular query language for graph databases, as OGO's query language. Unlike LINQ, which uses collections (e.g., List) as a source of data, OGO views the entire object graph as a single "collection". OGO is ideal for querying collections (just like LINQ), introspecting the runtime system state (e.g., finding all instances of a given class or accessing fields via reflection), and writing assertions that have access to the entire program state. We prototyped OGO for Java in two ways: (a) by translating the object graph into a Neo4j database on which we run Cypher queries, and (b) by implementing our own in-memory graph query engine that directly queries the object heap. We used OGO to rewrite hundreds of statements in large open-source projects into OGO queries. We report on our experience and the performance of our prototypes.
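OGO itself targets Java and Cypher, but the idea of querying the live heap as a graph can be sketched in a few lines of Python using the garbage collector's view of tracked objects; the `Node` class and the query are illustrative assumptions, not OGO's API:

```python
import gc

class Node:
    """A toy class whose live instances we will query from the heap."""
    def __init__(self, name):
        self.name = name
        self.next = None

a, b = Node("a"), Node("b")
a.next = b  # a field reference: an edge in the object graph

def instances_of(cls):
    """Heap-wide query, akin to 'MATCH (n:Node) RETURN n' over the
    object graph: all live instances of cls tracked by the GC."""
    return [o for o in gc.get_objects() if type(o) is cls]

found = instances_of(Node)
```

The in-memory variant of OGO's prototype works in this spirit, querying the heap directly instead of exporting it to Neo4j first.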
RDF question/answering (Q/A) allows users to ask questions in natural language over a knowledge base represented by RDF. To answer a natural language question, existing work takes a two-stage approach: question understanding and query evaluation. Its focus is on question understanding, dealing with the disambiguation of natural language phrases. The most common technique is joint disambiguation, which has an exponential search space. In this paper, we propose a systematic framework for answering natural language questions over an RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention of the natural language question in a structural way, based on which RDF Q/A is reduced to a subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions only when matches of the query are found; the cost of disambiguation is saved if no matches are found. More specifically, we propose two different frameworks to build the semantic query graph: one is relation (edge)-first and the other is node-first. We compare our method with state-of-the-art RDF Q/A systems on a benchmark dataset. Extensive experiments confirm that our method not only improves precision but also greatly speeds up query performance.
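The reduction of RDF Q/A to subgraph matching can be sketched as follows; the triples, the query graph, and the brute-force matcher are illustrative assumptions rather than the paper's algorithm:

```python
from itertools import product

# Knowledge graph as (subject, predicate, object) triples (toy data).
kg = {("Paul_Anderson", "directed", "Event_Horizon"),
      ("Paul_Anderson", "born_in", "London"),
      ("Ridley_Scott", "directed", "Alien")}

# Semantic query graph for "Who directed Event_Horizon?": the
# variable ?x is linked to the entity by the predicate 'directed'.
query = [("?x", "directed", "Event_Horizon")]

def match(query, kg):
    """Bind each variable (a term starting with '?') to an entity so
    that every query edge is an edge of the knowledge graph."""
    entities = sorted({t for s, _, o in kg for t in (s, o)})
    variables = sorted({t for s, _, o in query
                        for t in (s, o) if t.startswith("?")})
    bind = lambda t, env: env.get(t, t)
    results = []
    for combo in product(entities, repeat=len(variables)):
        env = dict(zip(variables, combo))
        if all((bind(s, env), p, bind(o, env)) in kg for s, p, o in query):
            results.append(env)
    return results

bindings = match(query, kg)  # [{'?x': 'Paul_Anderson'}]
```

Note how disambiguation falls out of matching: a candidate binding that produces no graph match is simply discarded, which is the cost saving the paper describes.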
This paper reports on findings from a project on information integration from multiple Business Information Systems with the help of a user-specific Enterprise Knowledge Graph. Most ERP systems currently in use store information objects in relational databases. Research in Web Sciences has shown that graph structures present information in a more intuitive way that is easier for humans to interpret. Following a DSR approach, we developed a concept for storing an ontology in a graph database that allows us to map ERP objects and load them at runtime. This allows the end user to navigate through the graph structure, providing intuitive and quick access to essential job-related information. We evaluated the suggested concept with a prototype following the paradigm of polyglot persistence; the prototype was equipped with a graph database to store the company-specific ontology in its native form. The program code was encapsulated into a separate module following a service-oriented software design.
This paper focuses on functional querying in graph databases. We consider the labelled property graph model and also mention the graph model behind XML databases. Attention is devoted to functional modelling of graph databases at both the conceptual and data levels. The notions of a graph conceptual schema and a graph database schema are considered. The notion of a typed attribute is used as a basic structure at both the conceptual and database levels. As a formal approach to declarative graph database querying, a version of typed lambda calculus is used. This approach allows the use of the logic necessary for querying, as well as arithmetic and aggregation functions. Another advantage is the ability to deal with relations and graphs in one integrated environment.
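A toy Python analogue of the functional style described above, with typed attributes as functions over node identifiers; the data and the attribute names are illustrative assumptions, not the paper's lambda-calculus formalism:

```python
# Property graph: node -> attribute map, plus labelled edges.
nodes = {"p1": {"name": "Ada", "age": 36},
         "p2": {"name": "Kurt", "age": 28}}
edges = [("p1", "KNOWS", "p2")]

# Typed attributes as functions (node id -> value), in the spirit of
# treating attributes as lambda terms.
name = lambda n: nodes[n]["name"]
age = lambda n: nodes[n]["age"]
knows = lambda n: [d for s, l, d in edges if s == n and l == "KNOWS"]

# Queries combine logic, arithmetic, and aggregation in one
# functional expression, as the abstract notes.
over_30 = [name(n) for n in nodes if age(n) > 30]
avg_age = sum(age(n) for n in nodes) / len(nodes)
```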
Big Data is a research area where many different disciplines work together. Social media has grown in popularity as a tool for disseminating and gathering information. However, the success of social media like Twitter, Facebook, etc., has not only attracted genuine users but also spammers who utilize social graphs, famous phrases, and hashtags to spread malware. This study uses several social network analysis and visualization methods based on bibliometric data from the Web of Science to look at the structure and patterns of interdisciplinary collaborations and the latest emerging overall practice. For a better understanding of spamming behaviors on Twitter, the Twitter data set is thoroughly analyzed and categorized into spam and non-spam classifications. Earlier studies confined their scope to investigating and blocking the most negatively influential spammers. However, the cumulative impact of other spammers with low individual negative-influence values but higher combined impact was neglected. In this article, we develop an algorithm for detecting social spam using Node Rank-based Influence Minimization (NRIM), which integrates node rank with the impact value of spam. The proposed spam influence minimization model also identifies spam-influential users and aids in limiting the flow of spam tweets within the Twitter network. Additionally, a detection algorithm for influential communities is proposed to limit the spread of spam content through influential communities on the Twitter network. The primary focus of this paper is to reduce the impact of spam on Twitter data by identifying influential spammers using the NRIM algorithm. First, tweets are classified into spam and non-spam using a machine learning algorithm. The spam observed in the graph is then analyzed, and the spammers are passed through the NRIM algorithm to find the influential ones. Finally, the negative impact of the spammers on the Twitter graph is reduced, and its effect on query processing over the graph is analyzed. The technique used to minimize the spammers' negative effect on the graph reduces query execution time by 12%.
• A novel taxonomy and thorough review providing a classification of spam and non-spam.
• Spam is discovered, and the impact of influence on its spread is the main emphasis.
• The spam influence value is determined using the NRIM algorithm.
• The impact of spam is reduced by an influence-based spam reduction strategy.
• Real-time statistics are used in the assessment; the proposed method outperforms earlier methods.
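A rough Python sketch of the general idea of combining a PageRank-style node rank with a spam-impact score to surface influential spammers; the toy graph, the scores, and the product-based combination rule are assumptions for illustration, not NRIM's exact formula:

```python
def node_rank(edges, d=0.85, iters=50):
    """Simple PageRank power iteration over a directed edge list."""
    nodes = {u for e in edges for u in e}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            if out[u]:
                share = d * r[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:  # dangling node: spread its mass evenly
                for v in nodes:
                    nxt[v] += d * r[u] / len(nodes)
        r = nxt
    return r

# Toy interaction edges; spam_score would come from the upstream
# spam/non-spam classifier (assumed values).
edges = [("s1", "u1"), ("s1", "u2"), ("u1", "u2"),
         ("s2", "u1"), ("u2", "s1")]
spam_score = {"s1": 0.9, "s2": 0.8, "u1": 0.0, "u2": 0.0}
rank = node_rank(edges)
influence = {n: rank[n] * spam_score.get(n, 0.0) for n in rank}
top_spammer = max(influence, key=influence.get)
```

Here "s1" outranks "s2" because it receives links as well as sending them, which is precisely the cumulative-influence effect the abstract says earlier blocking-based studies missed.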
Graph databases are nowadays considered the most appropriate solution for highly connected domains. Nevertheless, the lack of a fixed schema complicates the implementation of business rules and inhibits the usage of graph database technology in practical use cases. To tackle this challenge, we study cardinality constraints in graph databases, as their focus is on the essential component of the property graph data model: relationships between entities. This paper presents an abstract model for enforcing cardinality constraints, which represent complex graph patterns, in graph databases. First, we extend our initial k-vertex cardinality constraints model to allow the representation of cardinality constraints between a node and a subgraph with given node and/or edge properties. Second, we implement the proposed model as procedures deployed in the Neo4j Graph Database Management System (GDBMS) to prevent adding new edges that violate k-vertex cardinality constraints. Finally, we study the performance of the implemented approach on synthetic and real datasets and compare it to the initial model. Overall, the query execution time (QET) of the procedure increases exponentially on larger datasets. Still, the added node/edge property-level evaluation does not show a significant performance effect on the edge insertion process.
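An in-memory Python analogue of enforcing a cardinality constraint at edge-insertion time (the paper deploys this logic as Neo4j procedures; the class, the constraint shape, and the example edge type here are illustrative assumptions):

```python
class CardinalityViolation(Exception):
    """Raised when inserting an edge would exceed the allowed count."""

class Graph:
    def __init__(self):
        self.edges = []    # (src, edge_type, dst)
        self.max_out = {}  # edge_type -> max outgoing edges per node

    def constrain(self, edge_type, max_count):
        self.max_out[edge_type] = max_count

    def add_edge(self, src, edge_type, dst):
        """Check the constraint before inserting, mirroring the
        procedure-based enforcement described above."""
        limit = self.max_out.get(edge_type)
        current = sum(1 for s, t, _ in self.edges
                      if s == src and t == edge_type)
        if limit is not None and current >= limit:
            raise CardinalityViolation(
                f"{src} already has {current} '{edge_type}' edge(s)")
        self.edges.append((src, edge_type, dst))
```

The counting step is where the paper's performance finding bites: evaluating the constrained pattern on every insertion is what grows expensive on larger graphs.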
This roadmap identifies two developments for improving the process of literature reviewing. First, a method for systematically digitally encoding papers' core knowledge contributions in the form of a graph is proposed. Second, the creativity literature is reviewed as a source of inspiration for crafting theoretical contributions.