'Wikification of GIS by the masses' is a phrase first coined by Kamel Boulos in 2005, two years before Goodchild's term 'Volunteered Geographic Information'. Six years later (2005-2011), OpenStreetMap and Google Earth (GE) are now full-fledged, crowdsourced 'Wikipedias of the Earth' par excellence, with millions of users contributing their own layers to GE, attaching photos, videos, notes and even 3-D (three-dimensional) models to locations in GE. This paper offers a comprehensive state-of-the-art review of the overlapping domains of the Sensor Web, citizen sensing and 'human-in-the-loop sensing' in the era of the Mobile and Social Web, and the roles these domains can play in environmental and public health surveillance and crisis/disaster informatics. The topics covered range from using Twitter in participatory sensing and bicycle-mounted sensors in pervasive environmental sensing, to creating a 100,000-sensor geo-mashup using Semantic Web technology, to the 3-D visualisation of indoor and outdoor surveillance data in real time, and the development of next-generation, collaborative natural user interfaces that will power the spatially enabled public health and emergency situation rooms of the future, where sensor data and citizen reports can be triaged and acted upon in real time by distributed teams of professionals. We provide an in-depth review of the key issues and trends in these areas, the challenges faced when reasoning and making decisions with real-time crowdsourced data (such as information overload, 'noise', misinformation, bias and trust), the core technologies and Open Geospatial Consortium (OGC) standards involved (Sensor Web Enablement and Open GeoSMS), as well as a few outstanding project implementation examples from around the world.
The rapid development of manufacturing industries driven by the introduction of IIoT devices has led to the emergence of Industry 4.0, resulting in industries with greater intelligence, increased efficiency and reduced manufacturing costs. However, the introduction of IIoT devices opens the door to a variety of cyber threats in smart industries. Detecting cyber threats against such extensive, complex, and heterogeneous smart manufacturing industries is very challenging due to the lack of sufficient attack traces. Therefore, in this work, a Federated Learning enabled Deep Intrusion Detection (FLDID) framework is proposed to detect cyber threats in smart manufacturing industries. The proposed FLDID framework allows multiple smart manufacturing industries to build a collaborative model to detect threats and overcome the problem of limited attack examples faced by individual industries. Moreover, to ensure the privacy of model gradients, Paillier-based encryption is used in communication between edge devices (representing smart industries) and the server. A deep learning-based hybrid model consisting of a Convolutional Neural Network, Long Short-Term Memory, and Multi-Layer Perceptron is used as the intrusion detection model. An exhaustive set of experiments on a publicly available dataset demonstrates the effectiveness of the proposed framework for detecting cyber threats in smart industries compared with state-of-the-art approaches.
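The privacy mechanism described above rests on Paillier's additive homomorphism: the product of two ciphertexts decrypts to the sum of their plaintexts, so the server can aggregate gradients it cannot read. A minimal sketch of that aggregation step follows, using a textbook toy Paillier key and hypothetical gradient values; the paper's actual protocol, key sizes and encoding are not specified here.

```python
import math
import random

# --- Minimal textbook Paillier (toy ~34-bit modulus; real systems use >= 2048-bit keys) ---
p, q = 104723, 104729
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)                    # fresh randomness per ciphertext
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

# Each client scales its float gradient to an integer in Z_n and encrypts it.
SCALE = 10 ** 4
client_grads = [0.0213, -0.0071, 0.0158]          # hypothetical per-client gradients
cts = [encrypt(round(gr * SCALE) % n) for gr in client_grads]

# The server aggregates without decrypting: Enc(a) * Enc(b) mod n^2 = Enc(a + b).
aggregate = math.prod(cts) % n2

# Only the key holder recovers the sum; map back from Z_n to a signed value.
total = decrypt(aggregate)
total = total - n if total > n // 2 else total
avg_grad = total / SCALE / len(client_grads)      # federated-averaged gradient
```

The design point is that individual gradients never reach the server in the clear; only the decrypted aggregate is revealed to the key holder.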
Forestry 4.0 is inspired by the Industry 4.0 concept, which plays a vital role in the next industrial revolution, and is ushering in a new era of efficient and sustainable forest management. Environmental sustainability and climate change are related challenges in promoting sustainable management of natural forest resources. The Internet of Forest Things (IoFT) is an emerging technology that helps manage forest sustainability and protect forests from hazards by deploying smart devices that gather data streams for monitoring and fire detection. Stream processing is a well-established research area that has recently gained further significance with the emergence of IoFT devices, and distributed stream processing platforms such as Apache Flink, Storm, and Spark have emerged. Query windowing is the heart of any stream-processing platform: it splits an infinite data stream into chunks of finite data so a query can be executed. Dynamic window-based query processing can reduce reporting time in the case of missing and delayed events caused by data drift. In this paper, we present a novel dynamic mechanism to recommend the optimal window size and type based on the dynamic context of an IoFT application. In particular, we designed a dynamic window selector for stream queries that considers input stream data characteristics, application workload and resource constraints to recommend the optimal stream query window configuration. A research gap in the likelihood of adopting smart IoFT devices for environmental sustainability indicates a lack of empirical studies pursuing forest sustainability, i.e., sustainable forestry applications. We therefore focus on forest fire management and detection, one of the dynamic environmental management challenges posed by climate change, as a use case of Forestry 4.0 to deliver sustainable forestry goals. According to the dynamic window selector's experimental results, end-to-end latency for reported fire alerts is reduced by dynamically adapting the window size to changes in the IoFT stream rate.
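The idea of a dynamic window selector can be illustrated with a toy heuristic. The paper's actual recommendation model also weighs window type, workload and resource constraints; the function, parameter names and thresholds below are illustrative assumptions only: pick the smallest window that still collects enough events for the query, capped by the fire-alert latency budget.

```python
def recommend_window_seconds(stream_rate_eps, latency_budget_s,
                             min_events=50, max_window_s=60.0):
    """Toy dynamic-window heuristic: the window must hold at least
    `min_events` readings to answer the query meaningfully, but must
    not exceed the alert latency budget or a hard upper bound."""
    if stream_rate_eps <= 0:
        return max_window_s                      # stalled stream: widest window
    needed_s = min_events / stream_rate_eps      # time to collect min_events
    return max(1.0, min(needed_s, latency_budget_s, max_window_s))
```

Under this heuristic, as the IoFT stream rate rises during an active fire, the recommended window shrinks, which is the intuition behind the reported latency reduction.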
With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.
In recent years, due to technological advancements, the concept of Industry 4.0 (I4.0) has been gaining popularity while presenting several technical challenges that are being tackled by both the industrial and academic research communities. The Semantic Web, including Knowledge Graphs, is a promising technology that can play a significant role in realizing I4.0 implementations. This paper surveys the use of the Semantic Web and Knowledge Graphs for I4.0 from different perspectives, such as managing information related to equipment maintenance, resource optimization, and the provision of on-time and on-demand production and services. Moreover, to address the limited depth and expressiveness of current ontologies, we propose an enhanced reference generalized ontological model (RGOM) based on the Reference Architecture Model for Industry 4.0 (RAMI 4.0). RGOM can facilitate a range of I4.0 concepts, including improved asset monitoring, production enhancement, reconfiguration of resources, process optimization, product orders and deliveries, and the life cycle of products. The proposed RGOM can be used to generate a knowledge graph capable of answering real-time queries.
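At its core, answering queries over an RGOM-derived knowledge graph reduces to triple-pattern matching. A minimal sketch follows; the triples and entity names are invented for illustration, and a real deployment would use an RDF store queried via SPARQL rather than Python sets.

```python
# Toy knowledge graph: a set of (subject, predicate, object) triples.
kg = {
    ("Machine7", "hasStatus", "Running"),
    ("Machine7", "partOf", "AssemblyLineA"),
    ("AssemblyLineA", "produces", "OrderX"),
}

def match(kg, s=None, p=None, o=None):
    """Return triples matching an (s, p, o) pattern; None is a wildcard.
    Pattern matching like this is the basic operation behind graph queries."""
    return sorted((a, b, c) for (a, b, c) in kg
                  if s in (None, a) and p in (None, b) and o in (None, c))
```

For example, `match(kg, s="Machine7")` answers "what do we know about Machine7?", and chaining such patterns yields the multi-hop queries an I4.0 monitoring dashboard would issue.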
The Social Semantic Web. Breslin, John G.; Passant, Alexandre; Decker, Stefan. 2009. eBook.
The Social Web (including services such as MySpace, Flickr, last.fm, and WordPress) has captured the attention of millions of users as well as billions of dollars in investment and acquisition. Social websites, evolving around the connections between people and their objects of interest, are encountering boundaries in the areas of information integration, dissemination, reuse, portability, searchability, automation and demanding tasks such as querying. The Semantic Web is an ideal platform for interlinking and performing operations on the diverse person- and object-related data available from the Social Web, and has produced a variety of approaches to overcome the boundaries experienced in Social Web application areas. After a short overview of both the Social Web and the Semantic Web, Breslin et al. describe some popular social media and social networking applications, list their strengths and limitations, and describe applications of Semantic Web technology that address their current shortcomings by enhancing them with semantics. Across these social websites, they demonstrate a twofold approach: interconnecting the islands that are social websites with semantic technologies, and powering semantic applications with rich community-created content. They conclude with observations on how the application of Semantic Web technologies to the Social Web is leading towards the 'Social Semantic Web' (sometimes also called 'Web 3.0'), forming a network of interlinked and semantically rich content and knowledge. The book is intended for computer science professionals, researchers, and graduate students interested in understanding the technologies and research issues involved in applying Semantic Web technologies to social software. Practitioners and developers interested in applications such as blogs, social networks or wikis will also learn about methods for increasing the levels of automation in these forms of Web communication.
With the advent of Industry 4.0 (I4.0) and the resulting proliferation of industrial process data, deep learning (DL) techniques have become instrumental in developing intelligent fault diagnosis (FD) applications. However, despite their potentially superior process monitoring capabilities, DL-based FD models are poorly calibrated and generate point-estimate predictions without associated uncertainty estimates. For DL-based FD models, accurate predictive uncertainty estimates from well-calibrated models are essential to ensuring industrial process safety and reliability. This article proposes ensemble-to-distribution (E2D), an uncertainty-aware combination method for quality-monitoring FD based on an ensemble of deep neural networks. First, E2D addresses safety by providing accurate uncertainty estimates for model predictions, enabling informed decision-making that minimizes operational risks. Second, E2D improves model performance on out-of-distribution detection tasks, facilitating real-world deployment. Third, E2D is a post hoc method, implementable at inference time and compatible with diverse pretrained models. Finally, to demonstrate the effectiveness of E2D, we explore the problem of monitoring the stability of industrial processes and product quality using case studies on the Steel Plates Faults and APS Failure at Scania Trucks datasets.
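For context, the baseline E2D builds on — a deep ensemble — obtains its uncertainty estimate by averaging member predictions and measuring the spread of the result. The sketch below shows only that baseline step; E2D itself, which combines the ensemble into a single predictive distribution, is not reproduced here.

```python
import math

def ensemble_predict(member_probs):
    """Average the per-class probabilities of M ensemble members and
    report the entropy of the averaged distribution as an uncertainty score."""
    m, k = len(member_probs), len(member_probs[0])
    mean = [sum(p[c] for p in member_probs) / m for c in range(k)]
    entropy = -sum(pc * math.log(pc) for pc in mean if pc > 0)
    return mean, entropy

# Members that agree yield low entropy; disagreement (e.g. on an
# out-of-distribution fault sample) drives entropy up, flagging the prediction.
_, u_agree = ensemble_predict([[0.9, 0.1], [0.8, 0.2]])
_, u_disagree = ensemble_predict([[0.9, 0.1], [0.1, 0.9]])
```

High entropy on a fault prediction is exactly the signal an operator would use to defer an automated decision, which is the safety argument the abstract makes.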
As 5G emerges as the bedrock of the Industrial Internet of Things (IIoT), it facilitates the seamless, low-latency fusion of artificial intelligence and cloud computing, thereby fortifying the entire industrial procedure within a framework of smart and intelligent IIoT ecosystems. Concurrently, the continuously changing landscape of cybersecurity threats in the Internet of Things (IoT) is giving rise to unparalleled security complexities. These challenges are particularly pronounced in the context of zero-day attacks, and the integration of 5G technology further exacerbates the situation. This paper therefore introduces a cutting-edge 5G-enabled framework for cyberthreat detection that leverages Federated Learning (FL) without the need for data sharing. It employs a dual Autoencoder (AE) based model: each client runs two synchronized AEs integral to the FL mechanism, one evaluating the IIoT environment based on normal network patterns and the other focusing on attack scenarios. For decisive threat assessment, the system combines the AEs with a one-class SVM classifier. Furthermore, our method ensures a synergistic blend of self-learning and collaborative learning by implementing a polling mechanism between the overarching AE classifier and those tailored to individual client data; it counters zero-day threats and outperforms traditional AI/ML techniques.
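The dual-AE intuition can be sketched with reconstruction errors alone. This is a deliberate simplification: the paper routes these errors through a one-class SVM rather than a direct comparison, and the "autoencoders" below are toy stand-ins that merely pull inputs toward the centroid they were notionally trained on.

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def classify(sample, ae_normal, ae_attack):
    """An AE reconstructs data from its own training distribution well,
    so the lower reconstruction error indicates which regime fits."""
    e_normal = mse(sample, ae_normal(sample))
    e_attack = mse(sample, ae_attack(sample))
    return "attack" if e_attack < e_normal else "normal"

# Toy stand-ins for the two trained AEs (real ones are deep networks).
ae_normal = lambda x: [0.9 * v + 0.1 * 0.2 for v in x]   # trained near 0.2
ae_attack = lambda x: [0.9 * v + 0.1 * 0.8 for v in x]   # trained near 0.8
```

A sample that neither AE reconstructs well is the interesting zero-day case, which is why the full system hands both error signals to a one-class SVM instead of taking the simple minimum shown here.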
The proliferation of sensing technologies has resulted in vast amounts of time-series data being produced by machines in industrial plants and factories. Much of this information can be used to predict machine breakdown and degradation in a given factory. Downtime of industrial equipment accounts for heavy revenue losses, which can be reduced by making accurate failure predictions from sensor data, and Internet of Things (IoT) technologies have made it possible to collect such data in real time. We found that hybrid models can yield efficient predictions, as they are capable of capturing the abstract features that facilitate better predictions. In addition, developing an effective optimization strategy is difficult because of the complex nature of different sensor data in real-time scenarios. This work proposes a method for multivariate time-series forecasting for predictive maintenance (PdM) based on a combination of convolutional neural networks and long short-term memory with skip connections (CNN-LSTM). We experiment with CNN, LSTM, and CNN-LSTM forecasting models in turn for the prediction of machine failures. The data used in this experiment come from Microsoft's case study. The dataset provides information about failure history, maintenance history, error conditions, and machine features and telemetry, the latter consisting of voltage, pressure, vibration, and rotation sensor values recorded between 2015 and 2016. The proposed hybrid CNN-LSTM framework is a two-stage end-to-end model in which the LSTM is leveraged to analyze the relationships among different time-series variables through its memory function, and 1-D CNNs are responsible for effective extraction of high-level features from the data. Our method learns the long-term patterns of the time series by extracting the short-term dependency patterns of different time-series variables. In our evaluation, CNN-LSTM provided the highest and most reliable prediction accuracy.
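The division of labour described above — 1-D convolutions extracting short-term local features that the LSTM then relates over time — rests on the sliding-kernel operation below. This is a bare pure-Python rendition of the CNN stage only; the actual model's learned kernel weights and the LSTM stage are not reproduced.

```python
def conv1d(series, kernel, stride=1):
    """Slide a kernel across a univariate sensor series, emitting one
    local feature per window: the short-term pattern extraction the
    CNN stage performs before the LSTM models long-term structure."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(series) - k + 1, stride)]

# A differencing kernel [1, -1] responds to sudden jumps, e.g. a
# vibration spike in the telemetry preceding a failure.
features = conv1d([0.1, 0.1, 0.9, 0.1], [1, -1])
```

In the trained network many such kernels are learned rather than hand-picked, and their outputs for all sensor channels are stacked before being fed to the LSTM.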
Recently, many approaches have been proposed to manage sensor data using Semantic Web technologies for effective heterogeneous data integration. However, our empirical observations revealed that these solutions focus primarily on semantic relationships and pay less attention to spatio-temporal correlations. Most semantic approaches lack spatio-temporal support; some have attempted to provide full spatio-temporal support but perform poorly on complex spatio-temporal aggregate queries. In addition, while the volume of sensor data is growing rapidly, the challenge of querying and managing the massive volumes of data generated by sensing devices remains unsolved. In this article, we introduce EAGLE, a spatio-temporal query engine for querying sensor data based on the linked data model. The ultimate goal of EAGLE is to provide an elastic and scalable system that allows fast searching and analysis of the relationships of space, time and semantics in sensor data. We also extend SPARQL with a set of new query operators to support spatio-temporal computing in the linked sensor data context.
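The kind of query such SPARQL extensions target — combining a spatial region, a time interval and a semantic property in one filter — can be illustrated over plain Python dictionaries. The observation fields, property names and values below are invented; EAGLE itself operates on linked-data triples queried through its extended SPARQL operators, not on dicts.

```python
from datetime import datetime

def spatio_temporal_filter(observations, bbox, t_start, t_end, prop=None):
    """Keep observations inside a lat/lon bounding box and a time window,
    optionally restricted to one observed property (the semantic filter)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [o for o in observations
            if min_lat <= o["lat"] <= max_lat
            and min_lon <= o["lon"] <= max_lon
            and t_start <= o["time"] <= t_end
            and (prop is None or o["property"] == prop)]

obs = [
    {"lat": 10.2, "lon": 20.1, "time": datetime(2024, 5, 1, 12), "property": "ex:Temperature"},
    {"lat": 10.3, "lon": 20.2, "time": datetime(2024, 5, 3, 12), "property": "ex:Humidity"},
    {"lat": 50.0, "lon": 60.0, "time": datetime(2024, 5, 1, 12), "property": "ex:Temperature"},
]
hits = spatio_temporal_filter(obs, (10, 20, 11, 21),
                              datetime(2024, 5, 1), datetime(2024, 5, 2),
                              prop="ex:Temperature")
```

A declarative query engine evaluates the same three predicates with spatial and temporal indexes instead of a linear scan, which is where the elasticity and scalability claims come in.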