With the advancement of Health 2.0 technology, large-scale healthcare services are available online, and recommender systems for healthcare services have emerged to assist decision making. Most existing collaborative recommendation algorithms mine only global interactions and fail to capture local, user- or item-specific information. In addition, privacy is another significant concern for recommender systems in healthcare services. In this article, a privacy-aware factorization-based hybrid method is proposed for healthcare service recommendation. To better model user preferences and service features, multiview embeddings of users and healthcare services are learned. We further address the privacy problem by integrating local differential privacy and locality-sensitive hashing into the recommendation model for privacy-aware neighbor searching. The final prediction is made by a hybrid collaborative model trained with stochastic gradient descent. Experiments demonstrate the effectiveness of the proposed method in both recommendation performance and privacy protection.
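The privacy-aware neighbor searching above rests on locality-sensitive hashing. A minimal sketch of the underlying idea, using random-hyperplane LSH (an illustrative choice; the paper's exact hash family and privacy mechanism are not specified here): vectors pointing in similar directions tend to receive the same bit signature, so neighbors can be bucketed without exchanging raw preference vectors.

```python
import numpy as np

def lsh_signature(vec, planes):
    """Random-hyperplane LSH signature: one bit per hyperplane, set to 1
    when the vector lies on the positive side of that plane. Similar
    vectors are likely to collide into the same bucket."""
    return tuple(int(np.dot(vec, p) >= 0) for p in planes)

# Deterministic example planes (in practice these would be drawn at random).
planes = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
v = np.array([0.5, -0.3])
```

Vectors that differ only in magnitude hash identically, since the sign of each dot product is unchanged by positive scaling.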
In cloud computing based on software-defined networks (SDNs), large amounts of data are collected in the cloud for analysis, which leads to substantial redundant data and longer service response times over the capacity-limited Internet. To solve this problem, a novel service orchestration and data aggregation framework (SODA) is proposed, which orchestrates data as services and aggregates data packets to reduce data redundancy and service response delay. In SODA, the network is divided into three layers. 1) Data center layer (DCL): data centers (DCs) release software with a specific function to all devices in the network; devices use this software to orchestrate data as services and aggregate data packets, reducing service response delay. 2) Middle routing layer (MRL): the routing paths of data packets are adjusted according to the correlation of data packets and the routing distance; the higher the correlation and the shorter the routing distance, the higher the probability that data packets are transmitted along the same routing path, which reduces redundant data. 3) Vehicle network layer (VNL): mobile vehicles transmit data packets and services among devices. A series of experiments and simulations is conducted, and the results show that the proposed scheme outperforms the traditional scheme.
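The MRL's probabilistic path adjustment can be sketched as a scoring rule. This is a hypothetical formula, not SODA's actual one: it simply makes the merge probability increase with packet correlation and decrease with routing distance, as the abstract describes, with `alpha` weighting the two factors.

```python
import random

def merge_probability(correlation, distance, alpha=0.5, max_distance=10.0):
    """Hypothetical merge score: probability that two data packets are
    routed along the same path. Higher correlation and shorter distance
    both raise the probability, matching the MRL behavior described."""
    proximity = 1.0 - min(distance, max_distance) / max_distance
    return alpha * correlation + (1.0 - alpha) * proximity

def route_on_shared_path(correlation, distance, rng=random.random):
    """Draw a random number and decide whether to merge onto the shared path."""
    return rng() < merge_probability(correlation, distance)
```

Perfectly correlated packets at zero distance always merge; uncorrelated packets at maximum distance never do.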
Recommender systems can help correlate information and recommend personalised services to users as a general information filtering tool. However, contextual factors significantly affect user behaviour, especially in the Internet of Things (IoT), which makes modelling user preferences difficult. In this paper, we propose a personalised context-aware re-ranking algorithm (p-CAR) for IoT. Our primary purpose is to improve recommendation performance on multiple metrics, such as precision, recall, diversity, and popularity. The core idea is to re-rank the ranking list using the user's preference behaviour under different contexts. The re-ranking process is an iterative selection process: in each iteration, an optimal item that meets the target criteria is selected from the candidate items and added to the re-ranked list. The selection of an item depends on the given context and the user's interest in that context; both the user's preference and their interest in contexts are expressed as probabilities in our algorithm. In addition, we use a weight parameter to control the influence of contexts and model the contextual personalisation of different users through local personalisation parameters. We verify our algorithm through experiments on the real MovieLens 100K dataset and show its performance advantage over existing algorithms.
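The iterative selection loop described above can be sketched as a greedy re-ranker. The scoring formula and names here are illustrative assumptions, not the paper's exact model: a base ranking score and a contextual preference probability are blended by a weight parameter, and at each step the highest-scoring remaining candidate is appended to the re-ranked list.

```python
def p_car_rerank(ranked_items, pref_in_context, weight=0.7, top_k=5):
    """Greedy re-ranking sketch in the spirit of p-CAR.

    ranked_items: list of (item, base_score) pairs from the original list.
    pref_in_context: dict mapping item -> probability the user prefers it
                     under the current context.
    weight: controls the influence of the context, as in the abstract.
    """
    candidates = dict(ranked_items)
    reranked = []
    while candidates and len(reranked) < top_k:
        # each iteration selects the optimal item under the combined score
        best = max(candidates, key=lambda i: (1 - weight) * candidates[i]
                   + weight * pref_in_context.get(i, 0.0))
        reranked.append(best)
        del candidates[best]
    return reranked
```

With `weight=0` the original ranking is preserved; as `weight` grows, contextual preference dominates and can reorder the list.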
In a cloud computing environment, it is not easy to schedule the various Internet of Things (IoT) application tasks due to the heterogeneous characteristics of IoT. Efficient scheduling and load balancing of IoT applications is important to minimize the total execution time (makespan) while adhering to constraints such as task dependencies. In this paper, a cognitive, bio-inspired model is used to find an optimal task schedule for IoT applications in a heterogeneous multiprocessor cloud environment. Natural selection of genes and evolutionary foraging traits show that only the fittest species survive in nature; by analogy, a fit schedule is one that is efficient and respects the task ordering in the multiprocessor environment. A hybrid algorithm, GAACO, combining the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), is used to select only the best combination of tasks at each stage. This combination of GA and ACO ensures appropriate convergence and optimality. Scheduling with GAACO is non-preemptive, and it is assumed that each task is assigned to exactly one processor. When tested on task graphs of various sizes and different numbers of processors, GAACO proves competitive with the conventional GA and ACO approaches in the heterogeneous multiprocessor environment.
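A toy sketch of a GA+ACO hybrid for task-to-processor assignment, not the paper's exact GAACO: for simplicity it assumes independent tasks (no dependency constraints) and minimizes makespan. The GA side evolves assignments via selection and crossover; the ACO side maintains a pheromone matrix, reinforced by the best schedule found so far, that biases mutation.

```python
import random

def makespan(assignment, task_times, n_procs):
    """Finish time of the schedule: maximum load over processors
    (independent, non-preemptive tasks assumed)."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        loads[proc] += task_times[task]
    return max(loads)

def gaaco_schedule(task_times, n_procs, pop_size=20, generations=60, seed=0):
    """Illustrative GA+ACO hybrid for task scheduling."""
    rng = random.Random(seed)
    n = len(task_times)
    fitness = lambda a: makespan(a, task_times, n_procs)
    pheromone = [[1.0] * n_procs for _ in range(n)]
    pop = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) < fitness(best):
            best = list(pop[0])
        survivors = pop[: pop_size // 2]       # GA: keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]        # GA: one-point crossover
            t = rng.randrange(n)               # ACO: pheromone-guided mutation
            child[t] = rng.choices(range(n_procs), weights=pheromone[t])[0]
            children.append(child)
        pop = survivors + children
        for task, proc in enumerate(best):     # ACO: deposit on the best schedule
            pheromone[task][proc] += 0.5
    return best, fitness(best)

times = [3.0, 2.0, 4.0, 1.0]
best, ms = gaaco_schedule(times, n_procs=2)
```

The pheromone deposit makes processor choices that appear in good schedules progressively more likely to be re-sampled, which is where the ACO-style convergence pressure comes from.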
•An intelligent/cognitive IoT task scheduling model is proposed for the cloud computing environment.
•A combination of Genetic Algorithm and Ant Colony Optimization is used for task scheduling.
•Promising results are achieved compared with state-of-the-art approaches.
Artificial intelligence has achieved great success in the field of medical-assisted diagnosis, and deep learning technology plays a very important role in medical image recognition. However, it usually takes medical institutions extra time, energy, and cost to obtain a credible and efficient deep learning model, which hinders a wide range of applications, including medical image recognition and medical decision making. In this article, we propose a novel medical-assisted diagnosis model as a service (MDMaaS). Medical institutions can obtain and use medical-assisted diagnosis models directly from service providers: model training and model application in machine learning are assigned to the service provider and the consumer, respectively. We design a model acquisition method based on conventional samples and small samples for MDMaaS providers, and we also develop a trustworthy model-based recommendation method for MDMaaS consumers, which helps medical institutions obtain reliable medical-assisted diagnosis models quickly and efficiently. Extensive experiments on MDMaaS verify the effectiveness of the proposed method.
•A novel zero shot learning method for health signal processing through data augmentation.
•The method includes three parts, and a detailed workflow is designed for each part.
•The method does not rely on an attribute dataset.
•The method explores the reasons why the classifier makes its decisions and analyzes the explanations.
In recent years, the number of Internet of Things (IoT) devices has increased rapidly. The Internet of Biometric Things (IoBT) can process biometrics and health signals, and it will greatly extend the range of biometric applications. The analysis of health signals in the IoBT can use computer-aided diagnosis techniques. However, most existing computer-aided diagnosis methods are developed for common diseases and are not suitable for rare diseases. Zero shot learning is a potential method for the computer-aided diagnosis of rare diseases because it can identify objects of unknown categories. However, existing zero shot learning methods are based on attribute learning and rely on an attribute dataset, and no attribute dataset exists for health signal processing; they are therefore not suitable for this task. Against this background, we propose a zero shot augmentation learning model (ZSAL) in the IoBT for health signal processing. First, an expert doctor identifies the contour of a lesion and selects a background image without a lesion. Second, the computer automatically generates virtual images using zero shot augmentation technology. Finally, the generated virtual dataset is used to train a convolutional classifier, which we then apply to the computer-aided diagnosis of actual medical images. Experiments show the efficiency and effectiveness of our method.
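The augmentation step above can be sketched as a paste operation. The details are assumptions for illustration (the paper's generation procedure may be more sophisticated): the expert-annotated lesion pixels, marked by a binary mask, are placed at a random position inside each lesion-free background image to produce virtual training samples.

```python
import numpy as np

def zero_shot_augment(lesion, mask, backgrounds, rng=None):
    """Generate virtual samples by pasting the masked lesion region
    onto lesion-free background images.

    lesion, mask: 2-D arrays of the same shape (mask entries are 0/1).
    backgrounds: iterable of 2-D arrays, each at least as large as lesion.
    """
    rng = rng or np.random.default_rng(0)
    virtual = []
    h, w = lesion.shape
    for bg in backgrounds:
        sample = bg.copy()
        # random placement of the lesion within the background image
        y = int(rng.integers(0, bg.shape[0] - h + 1))
        x = int(rng.integers(0, bg.shape[1] - w + 1))
        region = sample[y:y + h, x:x + w]
        # keep lesion pixels where mask == 1, background pixels elsewhere
        sample[y:y + h, x:x + w] = np.where(mask == 1, lesion, region)
        virtual.append(sample)
    return virtual

lesion = np.full((2, 2), 9)
mask = np.array([[1, 0], [0, 1]])
out = zero_shot_augment(lesion, mask, [np.zeros((4, 4), dtype=int)])
```

The resulting virtual dataset would then feed the convolutional classifier's training loop.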
Sharing encrypted data with different users via public cloud storage is an important functionality. We therefore propose a key-aggregate authentication cryptosystem that can generate a constant-size key supporting flexible delegation of decryption rights for any set of ciphertexts. The size of the key is independent of the maximum number of ciphertexts, meaning that the expense of our scheme is stable no matter how frequently users upload files to the cloud server. In addition, the authentication process in our scheme solves the key-leakage problem of data sharing. The data owner can extract an aggregated key that encodes the indices of the ciphertexts, the identity of the delegate, and the expiration date of the key. The cloud server uses this key, together with the public parameters, to identify the person or entity requesting a download, allowing it to control the right to download. We prove that the authentication key cannot be forged and that the message in this key cannot be denied. Any method for efficient and secure data sharing in dynamic cloud storage must have stable expense and be leakage-resilient; our scheme satisfies both requirements simultaneously.
•We present a key-aggregate cryptosystem in which the number of keys is constant.
•The number of aggregated keys does not depend on the number of file classes.
•We set up an authentication process executed by the cloud server.
Cross-border data sharing for knowledge generation is a challenging research direction, since an application may access personal data stored in countries other than the one from which the application is accessed. In this article, we propose a cross-border data sharing platform in which a global cloud is built atop multiple security gateways set up in different countries. Once an application requests access to data from a particular country or region, the global cloud collects the data stored in local data hubs through that region's security gateway. While transferring the data to the global cloud, the security gateway records the transfer information on a blockchain maintained by the global cloud. When an application reports misbehavior (e.g., providing the wrong data type or incorrect data) against a security gateway, the global cloud verifies the claim by auditing the blockchain and punishes the misbehaving security gateway if the claim is true. In the case of a false misbehavior report, the application itself is punished by the global cloud. Thus, our platform provides an accountable data sharing function using blockchain under a relaxed trust assumption on the data providers. We present five algorithms that handle data access requests, data sharing, blockchain transactions, and the detection and punishment of misbehaving entities, and we describe how transactions take place in the platform. The proposed platform can therefore handle a misbehaving data sender, data receiver, or any other entity participating in the platform. We analyze our platform empirically through a number of experiments in a blockchain environment, and we delineate how the multilayer signature scheme (based on the Elliptic Curve Digital Signature Algorithm) operates in our platform.
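The audit-and-punish logic described above can be sketched as follows. The record fields and function names are illustrative assumptions, not the platform's actual five algorithms: the global cloud looks up the gateway's on-chain transfer record and punishes the gateway if the record contradicts what the application requested, or punishes the reporting application otherwise.

```python
def audit_report(report, ledger, punish):
    """Audit a misbehavior report against the blockchain ledger.

    report: {'gateway', 'reporter', 'tx_id', 'expected_type'} filed by an app.
    ledger: list of on-chain transfer records written by security gateways,
            each {'gateway', 'tx_id', 'data_type'}.
    punish: callback invoked on whichever party is found at fault.
    """
    record = next((tx for tx in ledger
                   if tx["gateway"] == report["gateway"]
                   and tx["tx_id"] == report["tx_id"]), None)
    # the claim holds if the recorded transfer does not match what was requested
    claim_is_true = record is not None and record["data_type"] != report["expected_type"]
    if claim_is_true:
        punish(report["gateway"])   # misbehaving gateway is punished
    else:
        punish(report["reporter"])  # false report: the application is punished
    return claim_is_true
```

Because the ledger is append-only and maintained by the global cloud, neither party can rewrite the transfer record after the fact, which is what makes the accountability check meaningful.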
The deployment of large-scale wireless sensor networks (WSNs) for Internet of Things (IoT) applications is increasing day by day, especially with the emergence of smart city services. The sensor data streams generated by these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry, and services, these data streams must be mined to extract insightful knowledge, such as for monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real time. In the last decade, a number of works in the literature have proposed behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT, highlighting and comparing their characteristics and differences. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area.
Microblog platforms have been extremely popular in the big data era due to their real-time diffusion of information. It is important to know what anomalous events are trending on a social network, to monitor their evolution, and to find related anomalies. In this paper we demonstrate RING, a real-time emerging anomaly monitoring system over microblog text streams. RING integrates our efforts in both emerging anomaly monitoring research and systems research. From the anomaly monitoring perspective, RING proposes a graph analytic approach such that (1) RING detects emerging anomalies at an earlier stage than existing methods, (2) RING is among the first to discover correlations among emerging anomalies in a streaming fashion, and (3) RING monitors anomaly evolution in real time at time scales ranging from minutes to months. From the systems perspective, RING (1) optimizes the time-ranged keyword query performance of a full-text search engine to improve the efficiency of monitoring anomaly evolution, and (2) improves the dynamic graph processing performance of Spark and implements our graph stream model on it. As a result, RING can scale to the entire Weibo or Twitter text stream with linear horizontal scalability. The system clearly demonstrates its advantages over existing systems and methods, from both the event monitoring and the system perspective, for the emerging event monitoring task.