In the rapidly evolving landscape of scientific semiconductor laboratories (commonly known as cleanrooms) integrated with Internet of Things (IoT) technology and Cyber-Physical Systems (CPSs), several factors, including operational changes, sensor aging, software updates, and the introduction of new processes or equipment, can lead to dynamic and non-stationary data distributions in evolving data streams. This phenomenon, known as concept drift, poses a substantial challenge for the static machine learning (ML) models used for anomaly detection and classification in traditional data-driven digital twins. As the distributions of normal and anomalous data drift over time, model performance decays, resulting in high false alarm rates and missed anomalies. To address this issue, we present TWIN-ADAPT, a continuous learning model within a digital twin framework designed to dynamically update and optimize its anomaly classification algorithm in response to changing data conditions. The model is evaluated against state-of-the-art concept drift adaptation models and tested under simulated drift scenarios that use diverse noise distributions to mimic real-world distribution shifts in anomalies. TWIN-ADAPT is applied to three critical CPS datasets from smart manufacturing labs (cleanrooms): Fumehood, Lithography Unit and Vacuum Pump. The evaluation results demonstrate that TWIN-ADAPT’s continual learning model for optimized and adaptive anomaly classification achieves a high accuracy and F1 score of 96.97% and 0.97, respectively, on the Fumehood CPS dataset, an average performance improvement of 0.57% over the offline model. For the Lithography and Vacuum Pump datasets, TWIN-ADAPT achieves average accuracies of 69.26% and 71.92%, respectively, with performance improvements of 75.60% and 10.42% over the offline model. These improvements highlight the efficacy of TWIN-ADAPT’s adaptive capabilities.
Additionally, TWIN-ADAPT shows highly competitive performance compared with other benchmark drift adaptation algorithms, demonstrating robustness across different modalities and datasets and confirming its suitability for any IoT-driven CPS framework that manages diverse data distributions in real-time streams. Its adaptability and effectiveness make it a versatile tool for dynamic industrial settings.
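The abstract does not spell out TWIN-ADAPT's update rule, so the following is only a minimal sketch of the general pattern it describes: monitoring a classifier's accuracy on a labeled stream and retraining on a recent window once performance decays. The class name, window sizes, trigger value, and the one-feature threshold classifier are all illustrative assumptions, not the paper's method.

```python
from collections import deque

class SlidingWindowAdapter:
    """Illustrative drift adapter (NOT the TWIN-ADAPT algorithm):
    a one-feature threshold classifier that is refit on a sliding
    window of labeled samples whenever recent accuracy degrades."""

    def __init__(self, window=100, trigger=0.8):
        self.window = deque(maxlen=window)   # recent (value, label) pairs
        self.trigger = trigger               # retrain when accuracy drops below this
        self.threshold = 0.0                 # values above it are called anomalous
        self.recent = deque(maxlen=50)       # rolling record of correct predictions

    def predict(self, value):
        return int(value > self.threshold)

    def update(self, value, label):
        """Consume one labeled sample; retrain if accuracy has decayed."""
        self.recent.append(self.predict(value) == label)
        self.window.append((value, label))
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy < self.trigger and len(self.window) >= 10:
            self._retrain()
        return accuracy

    def _retrain(self):
        # Refit the threshold as the midpoint between class means in the window.
        normal = [v for v, y in self.window if y == 0]
        anomalous = [v for v, y in self.window if y == 1]
        if normal and anomalous:
            self.threshold = (sum(normal) / len(normal)
                              + sum(anomalous) / len(anomalous)) / 2
```

Feeding the adapter a stream whose normal and anomalous distributions both shift upward lets it recover automatically, whereas a statically trained threshold would keep misclassifying the drifted normal class.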
In recent years, virtual reality and augmented reality applications have seen a significant increase in popularity. This is due to multiple technology trends. First, the availability of new tethered and wireless head-mounted displays allows viewers to consume new types of content. Second, 360° omnidirectional cameras, in combination with production software, make it easier to produce personalized 360° videos. Third, beyond these new developments for creating and consuming such content, video sharing websites and social media platforms enable users to publish and view 360° video content. In this paper, we present challenges of 360° video streaming systems, give an overview of existing approaches for 360° video streaming, and outline research opportunities enabled by 360° video. We focus on the data model for 360° video and the different challenges and approaches of creating, distributing, and presenting 360° video content, including 360° video recording, storage, distribution, edge delivery, and quality-of-experience evaluation. In addition, we identify major research opportunities with respect to efficient storage, timely distribution, and cybersickness-free personalized viewing of 360° videos.
This paper studies energy-efficient routing for data aggregation in wireless sensor networks. Our goal is to maximize the lifetime of the network, given the energy constraint on each sensor node. Using a linear programming (LP) formulation, we model this problem as a multicommodity flow problem, where a commodity represents the data generated from a sensor node and delivered to a base station. A fast approximation algorithm is presented, which is able to compute a (1−ε)-approximation to the optimal lifetime for any ε > 0. Along this baseline, we further study several advanced topics. First, we design an algorithm which utilizes the unique characteristic of data aggregation and is proved to reduce the running time of the fastest existing algorithm by a factor of K, K being the number of commodities. Second, we extend our algorithm to accommodate the same problem in the setting of multiple base stations, and study its impact on network lifetime improvement. All algorithms are evaluated through both solid theoretical analysis and extensive simulation results.
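The lifetime-maximization LP can be illustrated on a toy one-variable instance. The topology, per-unit energy costs, and battery capacities below are invented for illustration, and a grid search stands in for the LP solver the paper formulates:

```python
def lifetime(x, E_s=100.0, E_r=50.0):
    """Toy lifetime-maximization instance (all numbers invented):
    sensor s produces one data unit per time unit and routes a
    fraction x via relay r (costing s 1 J/unit and r 1 J/unit) and
    1 - x directly to the base station (costing s 2 J/unit).
    Network lifetime is the time until the first node dies."""
    drain_s = 2.0 * (1 - x) + 1.0 * x          # s pays for both paths
    drain_r = 1.0 * x                          # r pays only for forwarding
    t_s = E_s / drain_s
    t_r = E_r / drain_r if drain_r > 0 else float("inf")
    return min(t_s, t_r)

# A one-variable grid search stands in for the LP solver here; the
# optimum balances the two depletion times (x = 2/3, lifetime 75).
best_x = max((i / 1000 for i in range(1001)), key=lifetime)
```

Routing everything directly (x = 0) kills the sensor after 50 time units; splitting the flow 2:1 in favor of the relay equalizes both nodes' depletion times and extends the lifetime to 75, which is exactly the kind of trade-off the LP captures.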
The recent proliferation of human-carried mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to the public crowd equipped with various mobile devices. A fundamental issue in such systems is to effectively incentivize worker participation. However, instead of being an isolated module, the incentive mechanism usually interacts with other components which may affect its performance, such as data aggregation component that aggregates workers' data and data perturbation component that protects workers' privacy. Therefore, different from the past literature, we capture such interactive effect and propose INCEPTION, a novel MCS system framework that integrates an incentive, a data aggregation, and a data perturbation mechanism. Specifically, its incentive mechanism selects workers who are more likely to provide reliable data and compensates their costs for both sensing and privacy leakage. Its data aggregation mechanism also incorporates workers' reliability to generate highly accurate aggregated results, and its data perturbation mechanism ensures satisfactory protection for workers' privacy and desirable accuracy for the final perturbed results. We validate the desirable properties of INCEPTION through theoretical analysis as well as extensive simulations.
This study describes the features and utility of a novel augmented-reality-based telemedicine system with haptics that allows the sense of touch and direct physical examination during a synchronous immersive telemedicine consultation and physical examination. The system employs novel engineering features: (a) a new force enhancement algorithm to improve force rendering and overcome the "just-noticeable-difference" limitation; (b) an improved force compensation method to reduce the delay in force rendering; (c) use of the "haptic interface point" to reduce disparity between the visual and haptic data; and (d) implementation of efficient algorithms to process, compress, decompress, transmit and render 3-D tele-immersion data. A qualitative pilot study (n=20) evaluated the usability of the system. Users rated the system on a 26-question survey using a seven-point Likert scale, with percent agreement calculated from the total users who agreed with a given statement. Survey questions fell into three main categories: (1) ease and simplicity of use, (2) quality of experience, and (3) comparison to in-person evaluation. Average percent agreements between the telemedicine and in-person evaluation were highest for ease and simplicity of use (86%) and quality of experience (85%), followed by comparison to in-person evaluation (58%). Eighty-nine percent (89%) of respondents expressed satisfaction with the overall quality of experience. Results suggest that the system was effective at conveying audio-visual and touch data in real-time across 20.3 miles, and warrants further development.
This article presents results from our measurement and modeling efforts on the large-scale peer-to-peer (p2p) overlay graphs spanned by the PPLive system, the most popular and largest p2p IPTV (Internet Protocol Television) system today. Unlike other previous studies on PPLive, which focused on either network-centric or user-centric measurements of the system, our study is unique in (a) focusing on PPLive overlay-specific characteristics, and (b) being the first to derive mathematical models for its distributions of node degree, session length, and peer participation in simultaneous overlays.
Our studies reveal characteristics of multimedia streaming p2p overlays that are markedly different from existing file-sharing p2p overlays. Specifically, we find that: (1) PPLive overlays are similar to random graphs in structure and thus more robust and resilient to the massive failure of nodes, (2) Average degree of a peer in the overlay is independent of the channel population size and the node degree distribution can be fitted by a piecewise function, (3) The availability correlation between PPLive peer pairs is bimodal, that is, some pairs have highly correlated availability, while others have no correlation, (4) Unlike p2p file-sharing peers, PPLive peers are impatient and session lengths (discretized, per channel) are typically geometrically distributed, (5) Channel population size is time-sensitive, self-repeated, event-dependent, and varies more than in p2p file-sharing networks, (6) Peering relationships are slightly locality-aware, and (7) Peer participation in simultaneous overlays follows a Zipf distribution. We believe that our findings can be used to understand current large-scale p2p streaming systems for future planning of resource usage, and to provide useful and practical hints for future design of large-scale p2p streaming systems.
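Findings (4) and (7) are easy to illustrate numerically. The parameter values below (p = 0.2, Zipf exponent 1, ten overlays) are illustrative assumptions, not measurements from the paper:

```python
import math
import random

random.seed(1)

# Finding (4): session lengths are geometric.  Draw samples by inverse
# transform and recover the parameter by maximum likelihood (p = 1/mean).
p_true = 0.2                                   # illustrative, not measured
sessions = [1 + int(math.log(random.random()) / math.log(1 - p_true))
            for _ in range(20000)]
p_hat = 1.0 / (sum(sessions) / len(sessions))

# Finding (7): participation across simultaneous overlays follows a Zipf
# law: the i-th most popular overlay gets a share proportional to 1/i^a.
a = 1.0                                        # illustrative exponent
z = sum(1.0 / i ** a for i in range(1, 11))    # normalization constant
shares = [1.0 / (i ** a * z) for i in range(1, 11)]
```

The recovered p_hat lands very close to the true parameter, and the Zipf shares fall off steeply with rank, matching the paper's observation that a few overlays attract most simultaneous participation.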
This paper presents GRACE-OS, an energy-efficient soft real-time CPU scheduler for mobile devices that primarily run multimedia applications. The major goal of GRACE-OS is to support application quality of service and save energy. To achieve this goal, GRACE-OS integrates dynamic voltage scaling into soft real-time scheduling and decides how fast to execute applications in addition to when and how long to execute them. GRACE-OS makes such scheduling decisions based on the probability distribution of application cycle demands, and obtains the demand distribution via online profiling and estimation. We have implemented GRACE-OS in the Linux kernel and evaluated it on an HP laptop with a variable-speed CPU and multimedia codecs. Our experimental results show that (1) the demand distribution of the studied codecs is stable or changes smoothly. This stability implies that it is feasible to perform stochastic scheduling and voltage scaling with low overhead; (2) GRACE-OS delivers soft performance guarantees by bounding the deadline miss ratio under application-specific requirements; and (3) GRACE-OS reduces CPU idle time and spends more busy time in lower-power speeds. Our measurement indicates that compared to deterministic scheduling and voltage scaling, GRACE-OS saves energy by 7% to 72% while delivering statistical performance guarantees.
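A heavily simplified sketch of the core idea is choosing a CPU speed from the empirical distribution of per-job cycle demands so that the deadline miss ratio stays bounded. GRACE-OS additionally rescales speed while a job runs, which this sketch omits; the function name and interface are assumptions:

```python
def speed_for_deadline(cycle_demands, period_s, miss_ratio):
    """Pick the slowest CPU speed (cycles/second) that still finishes
    all but the allowed fraction of observed per-job cycle demands
    within the period, statistically bounding the deadline miss ratio.
    (A deliberately simplified sketch; GRACE-OS also adapts speed
    within a running job, which is omitted here.)"""
    demands = sorted(cycle_demands)
    allowed_misses = int(miss_ratio * len(demands))   # jobs allowed to miss
    # Slowest speed covering the (1 - miss_ratio) quantile of demand.
    return demands[len(demands) - 1 - allowed_misses] / period_s
```

With per-frame demands profiled online and a 5% miss budget, the scheduler runs at the 95th-percentile demand divided by the frame period, rather than at the worst case, which is where the energy savings come from.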
The recent proliferation of human-carried mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource sensory data collection to the public crowd. In order to identify truthful values from (crowd) workers' noisy or even conflicting sensory data, truth discovery algorithms, which jointly estimate workers' data quality and the underlying truths through quality-aware data aggregation, have drawn significant attention. However, the power of these algorithms could not be fully unleashed in MCS systems, unless workers' strategic reduction of their sensing effort is properly tackled. To address this issue, in this paper, we propose a payment mechanism, named Theseus, that deals with workers' such strategic behavior, and incentivizes high-effort sensing from workers. We ensure that, at the Bayesian Nash Equilibrium of the non-cooperative game induced by Theseus, all participating workers will spend their maximum possible effort on sensing, which improves their data quality. As a result, the aggregated results calculated subsequently by truth discovery algorithms based on workers' data will be highly accurate. Additionally, Theseus bears other desirable properties, including individual rationality and budget feasibility. We validate the desirable properties of Theseus through theoretical analysis, as well as extensive simulations.
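The quality-aware aggregation that Theseus's incentives feed into can be sketched as a minimal CRH-style truth discovery loop. This is a generic illustration of the class of algorithms the abstract cites, not Theseus's payment mechanism; the iteration count and log-based weighting rule are assumptions:

```python
import math

def truth_discovery(reports, iters=10):
    """Generic truth discovery sketch (not Theseus itself):
    alternately (a) estimate each task's truth as the weighted mean
    of workers' reports and (b) reweight each worker by how far
    their reports sit from the current truth estimates."""
    n_tasks = len(reports[0])
    weights = [1.0] * len(reports)
    truths = [0.0] * n_tasks
    for _ in range(iters):
        total_w = sum(weights)
        truths = [sum(w * r[t] for w, r in zip(weights, reports)) / total_w
                  for t in range(n_tasks)]
        errors = [sum((r[t] - truths[t]) ** 2 for t in range(n_tasks)) + 1e-12
                  for r in reports]
        total_e = sum(errors)
        weights = [math.log(total_e / e) for e in errors]  # low error => high weight
    return truths, weights
```

Given two accurate workers and one noisy one, the loop converges toward the accurate pair's values and assigns the noisy worker a much smaller weight, which is precisely the behavior high-effort sensing is meant to reinforce.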
Data fusion, or information collection, is one of the fundamental functions in future cyber-physical systems, but privacy concerns must be addressed and security must be assured in such systems. It is very challenging to achieve the synergy of privacy and integrity, because privacy-preserving schemes try to hide or perturb data, while integrity protection usually needs to enable peer monitoring or public access to the data. Privacy and integrity can therefore be conflicting requirements: one may hinder the implementation of the other. In this paper, we address both the privacy of individual sensory data and the integrity of the aggregation result simultaneously by proposing a protocol called iCPDA, which piggybacks on a cluster-based privacy-preserving data aggregation protocol (CPDA). We implement the add-on feature to protect the integrity of the aggregation result. To show the efficacy and efficiency of the proposed scheme, we present simulation results. To the best of our knowledge, this paper is among the first to preserve both privacy and integrity in data aggregation.
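The slicing idea behind the underlying CPDA protocol can be sketched as additive share splitting. This toy omits iCPDA's integrity add-on and all in-network message exchange; the share range and the single-machine simulation are assumptions:

```python
import random

def private_sum(values, seed=0):
    """Sketch of the share-slicing idea behind CPDA-style private
    aggregation (iCPDA's integrity protection is omitted): each node
    splits its reading into random shares, one per cluster member,
    so no single member sees a raw value, yet the shares still
    total to the true sum."""
    rng = random.Random(seed)                 # deterministic for the demo
    n = len(values)
    received = [0.0] * n                      # share totals held by each node
    for v in values:
        shares = [rng.uniform(-100.0, 100.0) for _ in range(n - 1)]
        shares.append(v - sum(shares))        # final share preserves the sum
        for j, s in enumerate(shares):
            received[j] += s
    return sum(received)                      # cluster head recovers the sum
```

Each node only ever sees random-looking shares from its peers, yet the cluster head's total equals the true aggregate, which is the privacy/utility split the protocol family relies on.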
Security requirements for network communication in critical infrastructures have focused more on the availability of data than on its integrity and confidentiality. The availability of communication in IEC 61850 substations can be hindered by Generic Object Oriented Substation Event (GOOSE) poisoning attacks, which might result in threats such as Denial of Service (DoS) or flooding attacks. In order to accurately detect such attacks, a novel method for the Early Detection of Attacks for GOOSE Network Traffic (EDA4GNeT) is developed in the present work. EDA4GNeT considers the dynamic behavior of network traffic in electrical substations: a mathematical model of GOOSE network traffic is adopted for anomaly detection based on statistical hypothesis testing. The developed traffic model can also support the management of the network architecture in IEC 61850 substations through appropriate performance studies. To test the novel anomaly detection method and compare the obtained results with related work in the literature, a simulation of a DoS attack against a 66/11 kV substation with several experiments is used as a case study.
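The abstract does not give the paper's actual traffic model, so the following is only an illustrative stand-in for hypothesis-testing-based detection: a one-sided z-test of a window's mean message rate against a learned baseline. The threshold of three standard errors is an assumption:

```python
import math

def detect_flood(window_rates, baseline_mean, baseline_std, z_limit=3.0):
    """Illustrative stand-in for statistical-hypothesis-testing
    detection (not EDA4GNeT's actual model): flag a window as
    anomalous when its mean message rate exceeds the learned
    baseline by more than z_limit standard errors."""
    n = len(window_rates)
    mean = sum(window_rates) / n
    z = (mean - baseline_mean) / (baseline_std / math.sqrt(n))
    return z > z_limit, z
```

A flooding attack that multiplies the GOOSE message rate drives the z-score far above any reasonable limit, while normal fluctuations stay well within it, which is the intuition behind testing traffic statistics rather than inspecting individual packets.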