Edge services provide an effective and superior means of real-time transmission and rapid processing of information in the Industrial Internet of Things (IIoT). However, the continually increasing number of smart devices results in privacy leakage and insufficient model accuracy in edge services. To tackle these challenges, in this article we propose a blockchain-based machine learning framework for edge services (BML-ES) in IIoT. Specifically, we construct novel smart contracts to encourage multiparty participation in edge services and improve the efficiency of data processing. Moreover, we propose an aggregation strategy to verify and aggregate model parameters, ensuring the accuracy of decision tree models. Finally, based on the SM2 public key cryptosystem, we protect data security and prevent data privacy leakage in edge services. Theoretical analysis and simulation experiments indicate that the BML-ES framework is secure, effective, and efficient, and is well suited to improving the accuracy of edge services in IIoT.
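A common baseline for multiparty parameter aggregation of this kind is a data-size-weighted average in the style of federated averaging. The sketch below is illustrative only; the function name, plain-list parameter representation, and weighting scheme are assumptions, not the BML-ES verification-plus-aggregation protocol itself.

```python
def aggregate_parameters(updates):
    """Weighted average of model parameter vectors.

    updates: list of (params, n_samples) pairs, where each params is a
    list of floats of equal length. Parties holding more data get
    proportionally more influence on the aggregated model.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    agg = [0.0] * dim
    for params, n in updates:
        weight = n / total
        for i, p in enumerate(params):
            agg[i] += weight * p
    return agg
```

In a blockchain setting such as the one the abstract describes, each update would additionally be signed and verified on-chain before entering the average.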
Currently, edge computing (EC), emerging as a burgeoning paradigm, is powerful in handling real-time resource provisioning for Internet of Things (IoT) applications. However, due to the spatial distribution of geographically sparse IoT devices and the resource limitations of EC units (ECUs), the resource utilization of the corresponding edge servers is relatively low and execution performance is, to some extent, ineffective. In addition, privacy leakage of personal information, location, media data, etc., during transmission from IoT devices to edge servers severely restricts the application of ECUs in IoT. To address these challenges, a two-phase offloading optimization strategy is put forward for joint optimization of offloading utility and privacy in EC-enabled IoT. Technically, a utility-aware task offloading method, named UTO, is devised first to maximize the resource utilization of ECUs and minimize the implementation time cost. Then a joint optimization method, named JOM, is designed to balance privacy preservation and execution performance. Eventually, experimental evaluations are conducted to illustrate the efficiency and reliability of UTO and JOM.
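Utility-aware offloading of the kind UTO targets is often approximated with a greedy best-fit heuristic: place each task on the ECU with the least remaining capacity that can still hold it, which tends to raise overall utilization. The sketch below is a generic heuristic under that assumption, not the paper's UTO algorithm.

```python
def greedy_offload(tasks, capacities):
    """Best-fit assignment of task resource demands to edge units (ECUs).

    tasks: list of resource demands; capacities: per-ECU capacity.
    Tasks are processed largest-first; returns the chosen ECU index per
    processed task (-1 if nothing fits) and the leftover capacities.
    """
    remaining = list(capacities)
    assignment = []
    for demand in sorted(tasks, reverse=True):
        # Best fit: the ECU with the smallest remaining capacity that fits.
        best = -1
        for i, r in enumerate(remaining):
            if r >= demand and (best == -1 or r < remaining[best]):
                best = i
        if best >= 0:
            remaining[best] -= demand
        assignment.append(best)
    return assignment, remaining
```

A joint utility/privacy method like JOM would add a privacy cost term to this placement decision rather than optimizing utilization alone.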
In this article, an industrial cyber-physical system (ICPS) is utilized for monitoring critical events, such as structural equipment conditions, in industrial environments. Such a system can easily become a point of attraction for cyberattackers, in addition to facing system faults, severe resource constraints (e.g., bandwidth and energy), and environmental problems. This makes data collection in the ICPS untrustworthy, as the data may even be altered after forwarding. Without validating the data before aggregation, detecting an event through aggregation in the ICPS can be difficult. This article introduces TrustData, a "Trustworthy and secured Data collection" scheme for high-quality data collection for event detection in the ICPS. It ensures that only authentic data are accumulated at groups of sensor devices in the ICPS. Based on the application requirements, a reduced quantity of data is delivered to an upstream node, say, a cluster head. We consider that these data might contain sensitive information, which is vulnerable to being altered before or after transmission. The contribution of this article is threefold. First, we provide the concept of TrustData to verify whether or not the acquired data are trustworthy (unaltered) before transmission, and whether or not the transmitted data are secured (data privacy is preserved) before aggregation. Second, we utilize a general measurement model that helps verify whether an acquired signal is untrustworthy before it is transmitted toward upstream nodes. Finally, we provide an extensive performance analysis using a real-world dataset, and our results demonstrate the effectiveness of TrustData.
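Verifying that forwarded readings are unaltered before a cluster head aggregates them is classically done with message authentication codes. The sketch below uses HMAC-SHA256 as a stand-in; TrustData's actual verification and measurement model are not reproduced here, and the key setup is assumed to be pre-shared.

```python
import hashlib
import hmac

def tag(key: bytes, reading: bytes) -> bytes:
    """Sensor attaches an HMAC tag so tampering can be detected upstream."""
    return hmac.new(key, reading, hashlib.sha256).digest()

def verify_before_aggregate(key, tagged_readings):
    """Drop readings whose tag fails verification, then aggregate (average)
    only the readings that are provably unaltered."""
    valid = [float(r) for r, t in tagged_readings
             if hmac.compare_digest(tag(key, r), t)]
    return sum(valid) / len(valid) if valid else None
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels during verification.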
With the development of wireless communication and positioning technology, location-based services (LBSs) have been gaining tremendous popularity due to their ability to greatly facilitate people's daily lives. Meanwhile, they also entail the risk of location privacy disclosure. To address this issue, general solutions introduce a single trusted anonymizer between the users and the location service provider (LSP). However, a single anonymizer offers limited privacy guarantees and incurs high communication overhead in continuous LBSs. Once the anonymizer is compromised, it may put user information in jeopardy. In this paper, we propose a dual privacy preserving (DPP) scheme in continuous LBSs to protect users' trajectory and query privacy. Our scheme introduces multiple anonymizers between the users and the LSP, and combines the Shamir threshold mechanism, a dynamic pseudonym mechanism, and K-anonymity technology to improve users' trajectory and content privacy in continuous LBSs. An anonymizer alone cannot obtain users' trajectories or query contents, and it can thus be semi-trusted. Our scheme enhances user privacy and effectively solves the single point of failure in the single-anonymizer structure. At the same time, query authentication guarantees the correctness of the query results. The analysis and simulation results demonstrate that the proposed scheme protects users' trajectory and content privacy effectively and reduces the computation and communication overhead of the single anonymizer.
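The Shamir threshold mechanism referenced above splits a secret (here it would be a query or location value, encoded as an integer) into n shares such that any k reconstruct it, so no single semi-trusted anonymizer learns anything. A minimal textbook sketch over a prime field, not the paper's exact parameterization:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all share arithmetic is mod P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

With k - 1 or fewer shares, the polynomial is information-theoretically undetermined, which is why each individual anonymizer can be merely semi-trusted.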
The COVID-19 disease has once again reiterated the impact of pandemics beyond a biomedical event, with potentially rapid, dramatic, sweeping disruptions to the management and conduct of everyday life. Not only do the rate and pattern of contagion threaten our sense of healthy living, but the safety measures put in place to contain the spread of the virus also require social distancing. Three different measures to counteract this pandemic situation have emerged, namely: (i) vaccination, (ii) herd immunity development, and (iii) lockdown. As the first measure is not ready at this stage and the second is largely considered unreasonable on account of the gigantic number of fatalities it would entail, the vast majority of countries have practiced the third option despite its potentially immense adverse economic impact. To mitigate such an impact, this paper proposes a data-driven dynamic clustering framework for moderating the adverse economic impact of the COVID-19 flare-up. Through an intelligent fusion of healthcare and simulated mobility data, we model lockdown as a clustering problem and design a dynamic clustering algorithm for localized lockdown that takes into account the pandemic, economic, and mobility aspects. We then validate the proposed algorithms by conducting extensive simulations using the Malaysian context as a case study. The findings signify the promise of dynamic clustering for lockdown coverage reduction, reduced economic loss, and reduced military unit deployment, and assess the potential impact of uncooperative civilians on the contamination rate. The outcome of this work is anticipated to pave the way for significantly reducing the severe economic impact of the COVID-19 spread. Moreover, the idea can be exploited for potential subsequent waves of coronavirus-related diseases and other upcoming life-threatening viral calamities.
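One simple way to read "lockdown as a clustering problem" is to lock down the connected components of districts whose infection rate exceeds a threshold, so restrictions stay localized rather than nationwide. The union-find sketch below is a hypothetical reduction along those lines; the district names, the single-rate criterion, and the threshold are all illustrative, and the paper's algorithm additionally weighs economic and mobility data.

```python
def lockdown_clusters(infection_rate, adjacency, threshold):
    """Group adjacent high-risk districts into joint lockdown zones.

    infection_rate: dict district -> rate; adjacency: list of district pairs.
    Returns the connected components of the subgraph of districts whose
    rate exceeds `threshold` (each component = one localized lockdown zone).
    """
    hot = {d for d, r in infection_rate.items() if r > threshold}
    parent = {d: d for d in hot}
    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]  # path compression
            d = parent[d]
        return d
    for a, b in adjacency:
        if a in hot and b in hot:
            parent[find(a)] = find(b)
    zones = {}
    for d in hot:
        zones.setdefault(find(d), set()).add(d)
    return list(zones.values())
```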
Data in the cloud have always been a point of attraction for cyberattackers. Nowadays, healthcare data in the cloud have become their new interest. Attacks on these healthcare data can have annihilating consequences for healthcare organizations. Decentralizing these cloud data can minimize the effect of attacks. Storing and running computation on sensitive private healthcare data in the cloud are made possible by decentralization, which is enabled by a peer-to-peer (P2P) network. By leveraging this decentralized or distributed property, blockchain technology ensures accountability and integrity. Different solutions have been proposed to control the effect of attacks using a decentralized approach, but these solutions have somehow failed to ensure the overall privacy of patient-centric systems. In this paper, we present a patient-centric healthcare data management system using blockchain technology as storage, which helps attain privacy. Cryptographic functions are used to encrypt patients' data and to ensure pseudonymity. We analyze the data processing procedures and the cost-effectiveness of the smart contracts used in our system.
• User-centric EHR systems give total control of data to users.
• Permissioned Blockchain and other functions restrict intruders from a security breach.
• User data are stored in blocks of the permissioned Blockchain.
• Elliptic Curve Cryptography (ECC) keeps data secure from third parties (pseudonymity).
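The pseudonymity property highlighted above can be illustrated independently of the ECC layer: what gets written to the chain is a keyed, one-way pseudonym rather than the patient identifier, so records stay linkable per patient but unlinkable to the real identity without the key. A minimal sketch using HMAC-SHA256 (the salt, identifier format, and 16-hex-digit truncation are assumptions, not the paper's scheme):

```python
import hashlib
import hmac

def pseudonym(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym: the same patient always maps to the same
    opaque identifier, but the real ID cannot be recovered (or even brute-
    force checked) without the secret key."""
    digest = hmac.new(secret_key, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

A keyed HMAC is used rather than a bare hash so an attacker who knows the identifier format cannot confirm guesses by re-hashing candidate IDs.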
With the development of Internet of Things (IoT) technologies, increasingly many devices are connected, and large amounts of data are produced. By offloading computing-intensive tasks from end devices, cloud-based storage technology has become mainstream. However, if the end IoT devices send all of their data to the cloud, data privacy becomes a serious issue. In this paper, we propose a new architecture for data synchronization based on fog computing. By offloading part of the computing and storage work to fog servers, data privacy can be guaranteed. Moreover, to decrease the communication cost and reduce latency, we design a differential synchronization algorithm. Furthermore, we extend the method by introducing Reed-Solomon codes for security. Through a series of experiments, we show that our architecture and algorithm achieve better performance than traditional cloud-based solutions in terms of both efficiency and security.
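Differential synchronization generally works by hashing fixed-size blocks and transmitting only the blocks whose hashes changed, which is what cuts the communication cost. The sketch below shows that idea in its simplest form; the tiny block size, SHA-256 choice, and function names are assumptions, and the paper's algorithm (plus its Reed-Solomon extension) is not reproduced.

```python
import hashlib

BLOCK = 4  # toy block size for illustration; real systems use kilobytes

def signatures(data: bytes):
    """Hash each fixed-size block; the fog server keeps these signatures."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(new: bytes, old_sigs):
    """Compute the blocks to transmit: those whose hash differs from the
    server's stored signature, plus any blocks beyond the old length."""
    changed = []
    for idx, i in enumerate(range(0, len(new), BLOCK)):
        block = new[i:i + BLOCK]
        h = hashlib.sha256(block).hexdigest()
        if idx >= len(old_sigs) or old_sigs[idx] != h:
            changed.append((idx, block))
    return changed
```

Only the `(index, block)` pairs in the delta cross the network; unchanged blocks are never resent.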
As an alternative to current wired networks, wireless sensor networks (WSNs) are becoming an increasingly compelling platform for engineering structural health monitoring (SHM) due to their relatively low cost, easy installation, and so forth. However, there is still an unaddressed challenge: application-specific dependability in terms of sensor fault detection and tolerance. Dependability is also affected by a reduction in the quality of monitoring when mitigating WSN constraints (e.g., limited energy, narrow bandwidth). We address these issues by designing a dependable distributed WSN framework for SHM (called DependSHM) and then examining its ability to cope with sensor faults and constraints. We find evidence that faulty sensors can corrupt the results of a health event (e.g., damage) in a structural system without being detected. More specifically, we bring attention to an undiscovered yet interesting fact: the real measured signals introduced by one or more faulty sensors may cause an undamaged location to be identified as damaged (a false positive) or a damaged location as undamaged (a false negative). This can be caused by faults in sensor bonding, precision degradation, amplification gain, bias, drift, noise, and so forth. In DependSHM, we present a distributed automated algorithm to detect such types of faults, and we offer an online signal reconstruction algorithm to recover from the wrong diagnosis. Through comprehensive simulations and a WSN prototype system implementation, we evaluate the effectiveness of DependSHM.
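A minimal distributed fault-detection idea of the kind DependSHM builds on is to flag a sensor whose reading deviates too far from its neighborhood consensus, then reconstruct the flagged value from that consensus. The sketch below uses the neighborhood median as the consensus statistic; this is a generic illustration, not the paper's detection or online reconstruction algorithm.

```python
from statistics import median

def detect_and_reconstruct(readings, tolerance):
    """Flag sensors whose reading deviates from the neighborhood median by
    more than `tolerance` (suspected bias/drift/gain faults), and replace
    each faulty value with that median as a simple reconstruction."""
    med = median(readings)
    repaired, faulty = [], []
    for i, r in enumerate(readings):
        if abs(r - med) > tolerance:
            faulty.append(i)
            repaired.append(med)
        else:
            repaired.append(r)
    return repaired, faulty
```

The median is chosen over the mean because a single grossly faulty sensor would drag the mean toward itself and mask its own fault.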
Recently, artificial intelligence approaches have been widely suggested to optimize numerous offloading and task-scheduling objectives. However, they confront difficulties in maintaining the privacy of data, with respect to its context, during the different stages of offloading. To address this problem, in this article we propose C-fDRL, a framework that provides context-aware federated deep reinforcement learning (fDRL) to maintain the context-aware privacy of task offloading. We perform this in three stages (CloudAI, EdgeAI, and DeviceAI) of the overall system. C-fDRL ensures that the privacy of high-context-aware data associated with the offloaded task is maintained locally at the DeviceAI, and that of low-context-aware data is maintained in a distributed manner at the EdgeAI. When there is an offloading task request or a user needs to offload data, C-fDRL uses a context-aware data management approach to decouple the context-aware (privacy) data from the tasks. This separates the context-aware data from the task for local computation and enables a new scheduling technique called the "context-aware multilevel scheduler," which places high-context-aware data on local devices and low-context-aware data on edge devices for computation before the actual task execution. We performed experiments to evaluate data privacy with the offloading tasks and the federated DRL. The results show that the proposed C-fDRL performs better than existing frameworks.
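The decoupling step described above amounts to routing each piece of task data to a tier by its context sensitivity: most sensitive stays on the device, less sensitive goes to the edge, the rest to the cloud. The sketch below is a deliberately simplified, hypothetical version of such a multilevel placement rule; the field names, numeric sensitivity scores, and thresholds are all assumptions, and C-fDRL learns these decisions with federated DRL rather than fixed thresholds.

```python
def schedule(task_fields, sensitivity, high_threshold=0.7, low_threshold=0.3):
    """Route each data field of a task by its context-sensitivity score:
    high -> local device, medium -> edge server, low -> cloud."""
    placement = {"device": [], "edge": [], "cloud": []}
    for field in task_fields:
        score = sensitivity[field]
        if score >= high_threshold:
            placement["device"].append(field)
        elif score >= low_threshold:
            placement["edge"].append(field)
        else:
            placement["cloud"].append(field)
    return placement
```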
Depression is one of the most common mental illnesses, and the symptoms shown by patients vary, making it difficult to diagnose in clinical practice and pathological research. Although researchers hope that artificial intelligence can contribute to the diagnosis and treatment of depression, traditional centralized machine learning methods need to aggregate patient data, and the data privacy of patients with mental illness must be strictly protected, which hinders the clinical application of machine learning algorithms. To address the problem of medical data privacy in depression research, in this article we conduct a study of federated learning to analyze and diagnose depression. First, we propose a general multiview federated learning framework using multisource data, which can extend any traditional machine learning model to support federated learning across different institutions or parties. Second, we employ late fusion methods to solve the problem of inconsistent time series in multiview data. Finally, we compare the federated framework with other cooperative learning frameworks in terms of performance and discuss the related results. The experimental results show that, with enough participants in federated learning, the prediction accuracy of the depression score can reach 85.13%, which is about 15% higher than local training. When the number of participants is small but the amount of data is sufficient, the prediction accuracy of the depression score can still reach 84.32%, an improvement of about 9%.
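Late fusion sidesteps inconsistent time series across views by training one model per view and combining only their outputs, typically by (weighted) averaging of the per-view predictions. A minimal sketch of that combination step, assuming scalar depression-score predictions and optional per-view weights (the paper's exact fusion rule may differ):

```python
def late_fusion(view_predictions, weights=None):
    """Combine per-view model outputs by a (optionally weighted) average.

    Each view's model is trained on its own time base, so no alignment of
    raw series is needed; only the final predictions are fused.
    """
    n = len(view_predictions)
    if weights is None:
        weights = [1.0 / n] * n  # default: treat all views equally
    return sum(w * p for w, p in zip(weights, view_predictions))
```

In a federated deployment, each participant would run this fusion locally and share only model parameters, never the underlying patient series.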