Graph signal processing (GSP) is a field that deals with data residing on irregular domains, i.e., graph signals. In this field, the graph filter bank is one of the most important developments, owing to its ability to provide multiresolution analysis of graph signals. However, most current research on graph filter banks focuses on static graph signals and does not exploit the temporal correlations of time-varying signals found in real-world applications such as wireless sensor networks. In this paper, the theory and design of joint time-vertex nonsubsampled filter banks are developed using a generalized product graph framework. Several methods are proposed to design filter banks with perfect reconstruction while still achieving filters with good spectral characteristics. A notable feature of the designed filter banks is that they can be realized entirely in a distributed manner. The subband filters are either of polynomial type or defined implicitly via iterative equations; in either case, implementing them requires only the exchange of information between neighboring nodes. The filter banks therefore have low implementation complexity and are suitable for processing large time-varying datasets. Numerical examples demonstrate the effectiveness of the proposed design methods, and an application to time-varying graph signal denoising shows the superiority of the joint time-vertex filter bank over other methods.
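To make the distributed-realization claim concrete, below is a minimal sketch (not the paper's actual design) of applying a degree-K polynomial graph filter y = h_0 x + h_1 Lx + ... + h_K L^K x: each multiplication by the Laplacian L is a sparse matrix-vector product, i.e., one round of information exchange between neighboring nodes. The graph, signal, and tap values are illustrative.

```python
# Minimal sketch: polynomial graph filtering via K rounds of neighbor
# exchange. Each L @ z touches only each node's 1-hop neighborhood.
import numpy as np
import scipy.sparse as sp

def polynomial_graph_filter(L, x, h):
    """L: sparse graph Laplacian, x: graph signal, h: filter taps (assumed)."""
    y = h[0] * x
    z = x
    for hk in h[1:]:
        z = L @ z          # one exchange with neighbors per iteration
        y = y + hk * z
    return y

# Toy 4-node path graph and a hypothetical tap set.
A = sp.csr_matrix(np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                            [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float))
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
x = np.array([1.0, 0.2, -0.5, 0.9])
print(polynomial_graph_filter(L, x, h=[0.5, 0.3, 0.1]))
```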
The Industrial Internet of Things (IIoT) accommodates a huge number of heterogeneous devices to deliver a wide range of services in distributed computing scenarios. Most productive services in IIoT are closely related to production control and require distributed network support with low delay. However, resource reservation based on gross traffic prediction ignores the importance of productive services and treats them as ordinary services, making it difficult to provide stable low-delay support for large volumes of productive service requests. In many production settings, unexpected communication delays are unacceptable and may lead to serious production accidents and great losses, especially when the productive service is security related. In this article, we propose a brain-like productive service provisioning scheme with federated learning (BrainIoT) for IIoT. The BrainIoT scheme is composed of three algorithms: industrial knowledge graph-based relation mining, federated learning-based service prediction, and globally optimized resource reservation. BrainIoT incorporates production information into network optimization and utilizes interfactory and intrafactory relations to enhance the accuracy of service prediction. The globally optimized resource reservation algorithm reserves resources for predicted services while accounting for the various resource types involved. Numerical results show that, by exploiting interfactory and intrafactory relations, the BrainIoT scheme achieves accurate service prediction, reaching 96% accuracy and improving quality of service.
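As a hedged illustration of the federated-learning component (the paper's model and features are not specified here), the sketch below runs FedAvg-style training across factories: each factory fits a local predictor on its private traffic data, and only model weights leave the factory. The linear model and all names are assumptions.

```python
# FedAvg-style sketch: local training per factory, weighted averaging
# of model weights at the coordinator; raw data never leaves a factory.
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=5):
    # One factory's local linear-regression step via gradient descent.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(factory_data, dim, rounds=20):
    w = np.zeros(dim)
    for _ in range(rounds):
        locals_ = [local_update(w.copy(), X, y) for X, y in factory_data]
        sizes = np.array([len(y) for _, y in factory_data], dtype=float)
        w = np.average(locals_, axis=0, weights=sizes)  # weight by data size
    return w

rng = np.random.default_rng(0)
data = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
print(fed_avg(data, dim=3))
```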
A role task scheduling method for fog computing systems based on behavior flow is proposed. As traditional computing shifts from the data center to the edge of the distributed network, the proposed task scheduling method establishes a role behavior flow system, defines the cooperation behaviors of resource-node roles and the organization behavior of roles, and sets up a center-to-edge role behavior flow pattern, enabling compute nodes to classify and process tasks according to their respective role behaviors. It can therefore effectively alleviate the slow service response, high power consumption, and frequent task interruption caused by large data volumes and redundant operations in traditional cloud computing systems.
This research presents a procedure and structure for efficient distributed computation and massive data query in online analytical processing. The approach utilizes a cluster structure to enable distributed pre-computation and query operations on data cubes. The key innovation lies in partitioning a large-capacity dataset into multiple blocks distributed across nodes using the MapReduce framework. Each node performs local closed cube computation through Map tasks, and parallel query operations are executed on different nodes to retrieve multiple measure values from the local closed cubes. The measure values are then merged using Reduce tasks. This procedure offers several advantages, including simplified and effective pre-computation and query processes for large-capacity data in online analysis, reduced storage space requirements for data cubes, and rapid response times for user queries.
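The map-then-merge flow described above can be illustrated with a minimal single-process sketch (the closed-cube algorithm itself is omitted; the grouping keys and measures are hypothetical): each block is aggregated locally, and a reduce step merges the partial measures.

```python
# Sketch of the map/reduce flow: per-block local aggregation (Map),
# then merging of partial measure values (Reduce).
from collections import defaultdict

def map_local_cube(block):
    # Local aggregation: group by (dim1, dim2) -> sum of measure.
    local = defaultdict(float)
    for dim1, dim2, measure in block:
        local[(dim1, dim2)] += measure
    return local

def reduce_merge(partials):
    merged = defaultdict(float)
    for part in partials:
        for key, value in part.items():
            merged[key] += value
    return dict(merged)

blocks = [
    [("US", "books", 3.0), ("US", "toys", 1.5)],
    [("EU", "books", 2.0), ("US", "books", 4.0)],
]
print(reduce_merge(map_local_cube(b) for b in blocks))
```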
Aiming to address the unsatisfactory performance of existing distributed deep learning architectures, such as poor accuracy, slow network communication, low computational speed, and insufficient security, we propose and design a learning model based on a distributed deep learning and blockchain architecture. We use a hybrid parallel algorithm based on blockchain (HP-B) to build a distributed deep consensus learning model. The HP-B algorithm groups computing nodes according to their performance, network links, and training samples, and the grouped computing equipment then performs optimized distributed computation. The purpose of this approach is to resolve security and scalability concerns and to improve the convergence speed and accuracy of deep learning. The proposed method achieves good results on the CIFAR-100, CIFAR-10, and ImageNet datasets. Finally, the blockchain-based distributed deep learning model is combined with a generative adversarial network to solve the segmentation problem for medical imaging data, and the experimental results are superior to those of other networks.
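As a loose illustration of the grouping idea (HP-B's actual criteria and consensus mechanism are not detailed here), the sketch below buckets nodes by measured throughput so that nodes of similar capability train together; all numbers and names are invented.

```python
# Hedged sketch: rank nodes by throughput and bucket them into groups
# of similar capability, so each group can train at its own pace.
def group_nodes(throughputs, n_groups):
    ranked = sorted(throughputs.items(), key=lambda kv: kv[1], reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, (node, _) in enumerate(ranked):
        groups[i * n_groups // len(ranked)].append(node)
    return groups

nodes = {"n1": 120.0, "n2": 95.0, "n3": 40.0, "n4": 38.0, "n5": 110.0, "n6": 42.0}
print(group_nodes(nodes, n_groups=2))  # fast group vs. slow group
```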
In this paper, we consider coded computation for matrix multiplication tasks in distributed computing to mitigate straggler effects. We assume that the stragglers' partial computation results can be leveraged at the master by assigning multiple sub-tasks to the workers. We propose a new coded computation scheme, namely Chebyshev coded fully private matrix multiplication (CFP), to preserve the privacy of a master in a scenario where the master wants to obtain a matrix multiplication result from libraries shared by the workers, while concealing both indices of the desired matrices from each worker. The key idea of CFP is to introduce Chebyshev polynomials, which commute under composition (T_m(T_n(x)) = T_n(T_m(x)) = T_{mn}(x)), into the queries sent to workers to allocate sub-tasks. We also extend CFP to preserve the privacy of the master against colluding workers. In conclusion, we show that CFP can preserve the privacy of a master from each worker and mitigate straggler effects more efficiently than existing schemes.
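To ground the coded-computation backdrop (this is generic polynomial coding for straggler mitigation, not CFP's private Chebyshev queries), the sketch below splits A into two row blocks and B into two column blocks, has five workers each compute one evaluation of a degree-3 matrix polynomial, and decodes A @ B from any four results.

```python
# Polynomial-coded matrix multiplication sketch: any 4 of 5 worker
# results suffice, so one straggler can be ignored.
import numpy as np

A = np.random.rand(4, 3); B = np.random.rand(3, 4)
A0, A1 = A[:2], A[2:]                    # row blocks of A
B0, B1 = B[:, :2], B[:, 2:]              # column blocks of B

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])               # one point per worker
work = [(A0 + x * A1) @ (B0 + x**2 * B1) for x in xs]  # each worker's sub-task

# Pretend worker 2 straggles: decode from the other four results.
alive = [0, 1, 3, 4]
V = np.vander(xs[alive], 4, increasing=True)           # degrees 0..3
coeffs = np.linalg.solve(V, np.stack([work[i] for i in alive]).reshape(4, -1))
# (A0 + xA1)(B0 + x^2 B1) = A0B0 + x A1B0 + x^2 A0B1 + x^3 A1B1
d0, d1, d2, d3 = [c.reshape(2, 2) for c in coeffs]
C = np.block([[d0, d2], [d1, d3]])
assert np.allclose(C, A @ B)
print("decoded A @ B from 4 of 5 workers")
```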
Cloud computing is a promising distributed computing platform for big data applications, e.g., scientific applications, since abundant resources can be obtained from cloud services for processing and storing both existing and generated application datasets. However, when tasks process big data stored in distributed data centers, the inevitable data movements cause huge bandwidth costs and execution delays. In this paper, we construct a tripartite graph-based model to formulate the data replica placement problem and propose a genetic algorithm-based data replica placement strategy for scientific applications to reduce data transmissions in the cloud. Our approach can reduce 1) the size of moved data, 2) the time of data movement, and 3) the number of movements. We conduct experiments comparing the proposed strategy with the random placement strategy used in the Hadoop Distributed File System (HDFS), which demonstrate that our strategy performs better for scientific applications in clouds.
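A hedged sketch of the genetic-algorithm idea follows (the paper's tripartite-graph fitness is not reproduced; the encoding and cost model below are assumptions): a chromosome assigns each dataset to a data center, and fitness penalizes the volume of data a task must fetch from a remote center.

```python
# GA sketch for replica placement: selection, one-point crossover,
# and mutation over dataset -> data-center assignments.
import random

def fitness(placement, tasks, sizes):
    # tasks: list of (datasets_used, data_center); remote fetches cost size.
    return sum(sizes[d] for used, dc in tasks for d in used
               if placement[d] != dc)

def ga_place(n_datasets, n_dcs, tasks, sizes, pop=30, gens=100):
    popn = [[random.randrange(n_dcs) for _ in range(n_datasets)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: fitness(p, tasks, sizes))
        survivors = popn[:pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_datasets)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(n_datasets)] = random.randrange(n_dcs)
            children.append(child)
        popn = survivors + children
    return min(popn, key=lambda p: fitness(p, tasks, sizes))

sizes = [5, 1, 3, 2]
tasks = [({0, 2}, 0), ({1, 3}, 1), ({0, 3}, 0)]
best = ga_place(n_datasets=4, n_dcs=2, tasks=tasks, sizes=sizes)
print(best, fitness(best, tasks, sizes))
```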
Through virtualization and resource integration, cloud computing has expanded its service scope and offers a better user experience than traditional platforms, while its business operation model brings huge economic and social benefits. However, a large amount of evidence shows that cloud computing faces a serious security and trust crisis, and building a trust-enabled transaction environment has become a key factor in its success. The traditional cloud trust model usually adopts a centralized architecture, which causes large management overhead, network congestion, and even single points of failure. Furthermore, due to a lack of transparency and traceability, trust evaluation results cannot be fully recognized by all participants. Blockchain is a new and promising decentralized framework and distributed computing paradigm. Its unique operating rules and traceability of records ensure the integrity, non-repudiation, and security of transaction data. Blockchain is therefore well suited to constructing a distributed and decentralized trust architecture. This paper carries out a comprehensive survey of blockchain-based trust approaches in cloud computing systems. Based on a novel cloud-edge trust management framework and a double-blockchain-based cloud transaction model, it identifies open challenges and gives directions for future research in this field.
The Internet of Things (IoT) is defined as interconnected digital and mechanical devices with intelligent and interactive data transmission features over a defined network. The ability of the IoT to collect, analyze, and mine data into information and knowledge motivates its integration with grid and cloud computing. New job scheduling techniques are crucial for the effective integration and management of IoT with grid computing, as they provide optimal computational solutions. The computational grid is a modern technology that enables distributed computing to take advantage of an organization's resources in order to handle complex computational problems. However, the scheduling process is considered an NP-hard problem due to the heterogeneity of resources and management systems in the IoT grid. This paper proposes a Greedy Firefly Algorithm (GFA) for job scheduling in the grid environment, in which a greedy method serves as a local search mechanism to improve the convergence rate and the efficiency of the schedules produced by the standard firefly algorithm. Several experiments were conducted using the GridSim toolkit to evaluate the proposed algorithm's performance. The study used real grid computing workload traces of several sizes: lightweight traces with only 500 jobs, typical traces with 3,000 to 7,000 jobs, and heavy-load traces with 8,000 to 10,000 jobs. The results revealed that the greedy firefly algorithm significantly reduces the makespan and execution times of the IoT grid scheduling process compared to the other evaluated scheduling methods. Furthermore, the proposed greedy firefly algorithm converges faster on large search spaces, making it suitable for large-scale IoT grid environments.
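The hybrid of firefly-style search and greedy local refinement can be sketched as follows (a hedged illustration, not the paper's GFA: the encoding, attraction rule, and parameters are assumptions): each firefly is a job-to-machine assignment, brightness is the inverse makespan, dim fireflies copy coordinates of the brightest one, and a greedy step then moves single jobs to better machines.

```python
# Firefly-with-greedy-local-search sketch for makespan minimization.
import random

def makespan(assign, runtimes, n_machines):
    load = [0.0] * n_machines
    for job, m in enumerate(assign):
        load[m] += runtimes[job]
    return max(load)

def greedy_step(assign, runtimes, n_machines):
    # Greedy local search: try moving each job to each machine.
    best = list(assign)
    for job in range(len(assign)):
        for m in range(n_machines):
            trial = list(best); trial[job] = m
            if makespan(trial, runtimes, n_machines) < makespan(best, runtimes, n_machines):
                best = trial
    return best

def gfa(runtimes, n_machines, n_fireflies=10, iters=50):
    flies = [[random.randrange(n_machines) for _ in runtimes]
             for _ in range(n_fireflies)]
    for _ in range(iters):
        flies.sort(key=lambda f: makespan(f, runtimes, n_machines))
        bright = flies[0]
        # Dimmer fireflies copy random coordinates of the brightest one,
        # then the greedy step refines each of them.
        flies = [bright] + [
            greedy_step([b if random.random() < 0.5 else f_i
                         for b, f_i in zip(bright, f)], runtimes, n_machines)
            for f in flies[1:]
        ]
    return flies[0]

runtimes = [4.0, 2.0, 7.0, 3.0, 5.0, 1.0]
best = gfa(runtimes, n_machines=3)
print(best, makespan(best, runtimes, 3))
```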
The Sixth Generation network (6G) can support autonomous driving along with various vehicular applications such as Vehicular Edge Computing (VEC), a distributed computing architecture for connected autonomous vehicles. Computational offloading and resource management in VEC can help address issues such as high communication costs, privacy protection, and excessively long training processes through an efficient federated learning training model for computational offloading and resource management in a vehicular environment. Two research issues are highlighted in this paper. The first concerns the current offloading system, namely its smart structure and operating system: consistent access to cloud computing services, regardless of the installed operating system or the hardware used, remains challenging. The second concerns security and privacy, two important features that should be maintained in cloud data centers and in data transmission during offloading and resource management. In this survey paper, a system is proposed that offers a partial solution to these issues. The proposed solution, identified while conducting this review, is a system that can train a model and help update the edge devices' information. The entire edge-cloud system can provide updated information for edge devices and can resolve the difficulty of obtaining key information necessary for model-related optimization. This can also enhance the effectiveness of 6G-V2X network frameworks for communication.