As an emerging decentralized secure data management platform, blockchain has gained much popularity recently. To maintain a canonical state of the blockchain data record, proof-of-work based consensus protocols provide the nodes in the network, referred to as miners, with incentives for confirming new blocks of transactions through a process of "block mining", i.e., solving a cryptographic puzzle. Under the circumstance of limited local computing resources, e.g., on mobile devices, it is natural for rational miners, i.e., consensus nodes, to offload the computational tasks for proof of work to cloud/fog computing servers. Therefore, we focus on the trading between the cloud/fog computing service provider and the miners, and propose an auction-based market model for efficient computing resource allocation. In particular, we consider a proof-of-work based blockchain network, which is constrained by computing resources and deployed as an infrastructure for decentralized data management applications. Due to the competition among miners in the blockchain network, allocative externalities are explicitly taken into account when designing the auction mechanisms. Specifically, we consider two bidding schemes: the constant-demand scheme, where each miner bids for a fixed quantity of resources, and the multi-demand scheme, where miners can submit their preferred demands and bids. For the constant-demand bidding scheme, we propose an auction mechanism that achieves optimal social welfare. In the multi-demand bidding scheme, the social welfare maximization problem is NP-hard; therefore, we design an approximation algorithm that guarantees truthfulness, individual rationality, and computational efficiency. Through extensive simulations, we show that our proposed auction mechanisms with the two bidding schemes can efficiently maximize the social welfare of the blockchain network and provide effective strategies for the cloud/fog computing service provider.
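The constant-demand scheme above can be illustrated with a minimal k-unit Vickrey-style sketch: each miner bids for one fixed block of computing resources, the top-k bidders win, and every winner pays the highest losing bid (the classic critical payment that makes truthful bidding a dominant strategy). This is a generic textbook mechanism for illustration, not the exact mechanism proposed in the paper; the miner names and bid values are invented.

```python
# Sketch of a k-unit Vickrey-style auction for the constant-demand bidding
# scheme. Assumption: the provider sells k identical resource blocks and
# each miner (bidder) wants exactly one block.

def constant_demand_auction(bids, k):
    """bids: {miner_id: bid}; k: number of identical resource blocks.
    Returns (winners, per-winner payment, social welfare of the allocation)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [miner for miner, _ in ranked[:k]]
    # Critical payment: the (k+1)-th highest bid (0 if supply exceeds demand).
    payment = ranked[k][1] if len(ranked) > k else 0.0
    welfare = sum(bid for _, bid in ranked[:k])
    return winners, payment, welfare

winners, pay, sw = constant_demand_auction(
    {"m1": 9.0, "m2": 7.0, "m3": 4.0, "m4": 2.0}, k=2)
# winners == ["m1", "m2"], each pays 4.0, welfare == 16.0
```

Charging the highest losing bid rather than the winners' own bids is what removes the incentive to shade bids below true valuations.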
Blockchain, an emerging decentralized security technology, has been applied in many applications, such as Bitcoin, smart grid, and the Internet of Things. However, the mining process consumes too much energy and computing resources on handheld devices, which restricts the use of blockchain in mobile environments. In this paper, we consider deploying an edge computing service to support mobile blockchain. We propose an auction-based edge computing resource allocation mechanism for the edge computing service provider. Since there is competition among miners, allocative externalities are taken into account in the model. Our auction mechanism maximizes social welfare while guaranteeing truthfulness, individual rationality, and computational efficiency. Through extensive simulations, we evaluate the performance of our auction mechanism and show that it can efficiently solve the social welfare maximization problem for the edge computing service provider.
Spectrum inference, also known as spectrum prediction in the literature, is a promising technique of inferring the occupied/free state of radio spectrum from already known/measured spectrum occupancy statistics by effectively exploiting the inherent correlations among them. In the past few years, spectrum inference has gained increasing attention owing to its wide applications in cognitive radio networks (CRNs), ranging from adaptive spectrum sensing and predictive spectrum mobility to dynamic spectrum access and smart topology control, to name just a few. In this paper, we provide a comprehensive survey and tutorial on the recent advances in spectrum inference. Specifically, we first present the preliminaries of spectrum inference, including the sources of spectrum occupancy statistics and the models of spectrum usage, and characterize the predictability of spectrum state evolution. By introducing a taxonomy of spectrum inference from a time-frequency-space perspective, we offer an in-depth tutorial on the existing algorithms. Furthermore, we provide a comparative analysis of various spectrum inference algorithms and discuss the metrics for evaluating the efficiency of spectrum inference. We also portray the various potential applications of spectrum inference in CRNs and beyond, with an outlook on fifth-generation mobile communications and next-generation high-frequency communication systems. Last but not least, we highlight the critical research challenges and open issues ahead.
One of the primary challenges in wireless blockchain networks is to ensure security and high throughput with constrained communication and energy resources. In this paper, using curve fitting on a collected blockchain performance dataset, we explore the impact of the data transmission rate configuration on the wireless blockchain system under different network topologies, and define a blockchain utility function that balances the throughput, energy efficiency, and stale rate. For efficient blockchain network deployment, we propose a novel Graph Convolutional Neural Network (GCN)-based approach to quickly and accurately determine the optimal data transmission rate. The experimental results demonstrate that the average relative deviation between the blockchain utility obtained by our GCN-based method and the optimal utility is less than 0.21%.
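The utility trade-off described above can be sketched as a toy objective over the transmission rate r: throughput saturates as r grows, transmission energy cost grows superlinearly, and the stale-block penalty shrinks because blocks propagate faster. The functional forms and weights below are illustrative assumptions, not the curves fitted from the paper's dataset; the brute-force search stands in for the baseline that a trained GCN would approximate.

```python
# Illustrative blockchain utility of the kind described in the abstract:
# a weighted trade-off between throughput, energy cost, and stale rate,
# all as functions of the data transmission rate r. The shapes and
# weights are assumptions for illustration only.
import math

def utility(r, w=(1.0, 1.0, 2.0)):
    throughput = r / (1.0 + 0.01 * r)   # transactions/s, saturating in r
    energy = 0.001 * r ** 2             # energy cost grows superlinearly
    stale = math.exp(-0.05 * r)         # faster relaying -> lower stale rate
    return w[0] * throughput - w[1] * energy - w[2] * stale

# Exhaustive search over candidate rates (the slow baseline a GCN replaces).
best = max(range(1, 201), key=utility)
```

With these toy constants the optimum lies in the interior of the search range, so the trade-off is genuine rather than a boundary effect.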
In spectrum sharing systems, locating multiple radiation sources can efficiently identify intruders, protecting the shared spectrum from malicious jamming and other unauthorized usage. Compared to single-source localization, simultaneously locating multiple sources is more challenging in practice since the association between measurement parameters and source nodes is not known. Moreover, the number of possible measurement-source associations increases exponentially with the number of sensor nodes. It is therefore crucial to discriminate which measurements correspond to the same source before localization. In this work, we propose a centralized localization scheme to estimate the positions of multiple sources. Firstly, we develop two computationally light methods to handle the unknown RSS-AOA measurement-source association problem. One method utilizes linear coordinate conversion to compute the minimum spatial Euclidean distance summation of measurements. The other exploits a long short-term memory (LSTM) network to classify the measurement sequences. Then, we propose a weighted least squares (WLS) approach that obtains a closed-form estimate of the positions by linearizing the non-convex localization problem. Numerical results demonstrate that the proposed scheme achieves sufficient localization accuracy under adversarial scenarios in which the sources are in close proximity and the measurement noise is strong.
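The WLS step can be sketched for the AOA part alone: once measurements have been associated to a single source, each bearing θ_i observed at sensor (x_i, y_i) yields the linear constraint sin(θ_i)·x − cos(θ_i)·y = sin(θ_i)·x_i − cos(θ_i)·y_i, and the position follows from the 2×2 weighted normal equations. This stdlib-only sketch omits the RSS terms and the association step from the paper; the sensor layout and source position are invented.

```python
# Minimal WLS sketch for AOA-only source localization: solve
# (A^T W A) p = A^T W b for p = (x, y) via the 2x2 normal equations.
import math

def wls_aoa(sensors, thetas, weights):
    # Accumulate the weighted normal equations row by row.
    m00 = m01 = m11 = c0 = c1 = 0.0
    for (xi, yi), th, w in zip(sensors, thetas, weights):
        a0, a1 = math.sin(th), -math.cos(th)   # row of A
        bi = a0 * xi + a1 * yi                 # entry of b
        m00 += w * a0 * a0; m01 += w * a0 * a1; m11 += w * a1 * a1
        c0 += w * a0 * bi;  c1 += w * a1 * bi
    det = m00 * m11 - m01 * m01
    # Cramer's rule on the 2x2 system.
    return ((m11 * c0 - m01 * c1) / det, (m00 * c1 - m01 * c0) / det)

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
thetas = [math.atan2(3.0 - yi, 4.0 - xi) for xi, yi in sensors]  # exact bearings to (4, 3)
est = wls_aoa(sensors, thetas, [1.0, 1.0, 1.0])
```

With noise-free bearings the linear system is consistent and the true position is recovered exactly; in practice the weights would encode the per-sensor measurement noise.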
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio equipment may frequently switch the target frequency as the electromagnetic environment changes. A previously trained prediction model often cannot maintain good performance when only a small amount of historical data is available for the new target frequency. Moreover, the cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time to recollect data for the new frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GANs) and deep transfer learning. Firstly, through a similarity measurement, we pre-train a GAN model using the historical data of the frequency band that is most similar to the target frequency band. Then, after data augmentation by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained via deep transfer learning on the generated high-similarity data. Finally, experimental results demonstrate the effectiveness of the proposed framework.
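The similarity-measurement step can be sketched as follows: compare the small target-band sample against each candidate historical band and select the most similar one for GAN pre-training. Pearson correlation is used here as a stand-in similarity metric, and the band names and occupancy values are invented; the paper's actual metric and data may differ.

```python
# Sketch of cross-band selection: pick the historical band whose occupancy
# series best matches the small target-band sample. Pearson correlation is
# an assumed similarity measure for illustration.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

historical = {  # invented occupancy histories for two candidate bands
    "band_A": [0.1, 0.4, 0.8, 0.6, 0.2, 0.1],
    "band_B": [0.9, 0.7, 0.2, 0.3, 0.8, 0.9],
}
target_sample = [0.2, 0.5, 0.9, 0.5, 0.3, 0.2]  # small target-band sample

# The winner's history would be used to pre-train the GAN.
best_band = max(historical, key=lambda b: pearson(historical[b], target_sample))
```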
A big data service is any data-originated resource that is offered over the Internet. The performance of a big data service depends on the data bought from the data collectors. However, the problem of optimal pricing and data allocation in big data services is not well studied. In this paper, we propose an auction-based big data market model. We first define the data cost and utility based on the impact of data size on the performance of big data analytics, e.g., machine learning algorithms. The big data services are considered as digital goods and are uniquely characterized by "unlimited supply", in contrast to conventional goods, which are limited. We therefore propose a Bayesian profit maximization auction which is truthful, individually rational, and computationally efficient. The optimal service price and data size are obtained by solving the profit maximization auction. Finally, experimental results on a real-world taxi trip dataset show that our big data market model and auction mechanism effectively solve the profit maximization problem of the service provider.
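The data cost/utility trade-off can be illustrated with a common diminishing-returns model of analytics performance: accuracy as a power law of data size n, minus a linear data acquisition cost. The constants below are assumptions for illustration only, not values estimated from the taxi trip dataset.

```python
# Toy data-utility model: accuracy follows a power-law learning curve
# acc(n) = a - b * n^(-c), so buying more data yields diminishing returns,
# while data cost grows linearly in n. All constants are illustrative.
def utility(n, a=0.95, b=0.8, c=0.5, value=1000.0, cost_per_record=0.05):
    accuracy = a - b * n ** (-c)          # diminishing-returns learning curve
    return value * accuracy - cost_per_record * n

# Optimal data size to purchase; the continuous optimum of this particular
# model is exactly n = 400 (where 400 * n^-1.5 = 0.05).
best_n = max(range(1, 20001), key=utility)
```

The interior optimum captures the economics in the abstract: past a certain data size, the marginal accuracy gain no longer justifies the marginal data cost.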
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
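The FL loop described above (local training on device data, upload of model updates rather than raw data, server-side aggregation) is essentially Federated Averaging (FedAvg). A minimal sketch, with a one-parameter linear model standing in for a deep network and invented client datasets:

```python
# Minimal FedAvg sketch: each client takes a local gradient step on its own
# data, then the server averages the resulting parameters weighted by local
# dataset size. Only parameters cross the network, never raw data.

def local_update(w, data, lr=0.1):
    # One gradient step on the MSE loss for the scalar model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg(global_w, clients, rounds=50):
    for _ in range(rounds):
        updates = [(local_update(global_w, data), len(data)) for data in clients]
        total = sum(n for _, n in updates)
        # Server aggregation: average parameters weighted by dataset size.
        global_w = sum(w * n for w, n in updates) / total
    return global_w

clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # all consistent with w = 2
w = fedavg(0.0, clients)   # converges toward w = 2.0
```

Real FL systems run several local epochs per round and sample a subset of devices each round; the size-weighted average shown here is the core aggregation rule.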
High-frequency (HF) communication plays a vital role in military and commercial fields because of its capability of trans-horizon propagation. Accurate prediction of the maximum usable frequency (MUF) is critical to improving the efficiency of HF ionospheric propagation. Deep learning has demonstrated remarkable performance in time-series prediction in recent years. However, the MUF is a nonstationary time series with complex properties across multiple time scales. Therefore, we must fully explore the features of the MUF sequence to provide reliable support for selecting deep-learning prediction models. In this article, we first decompose real-world MUF data measured at vertical sounding stations to obtain different components. After analyzing the behavior of each component at different locations, we propose capability requirements for the MUF prediction model and verify them by comparing the most advanced existing deep-learning prediction models. Then, we calculate the complexity of the MUF sequence based on multiscale entropy. The entropy series at different time scales show that the key to improving a model's prediction performance is the ability to fit the nonlinear change of the long-term trend component at the short-period scale. Furthermore, we propose a quantitative MUF prediction scheme based on multiscale-entropy similarity tolerance. Based on the Fano inequality, we calculate and compare the upper and lower bounds of predictability on the quantized sequence and the original data, verifying the effectiveness of the proposed scheme. Experimental results show that, as the latitude decreases, models based on time-series decomposition achieve better prediction performance than state-of-the-art schemes.
Compared to deep-learning models without time-series decomposition, the root mean square error (RMSE) of the predicted MUF values for four cities, from low to high latitude, decreased by an average of 0.11, 0.20, 0.02, and 0.03 MHz, respectively.
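The multiscale-entropy computation used above to quantify sequence complexity can be sketched as follows: coarse-grain the series at each time scale with non-overlapping window means, then compute sample entropy SampEn(m, r) on each coarse-grained series. The defaults m = 2 and r = 0.15·std are common choices in the multiscale-entropy literature, not necessarily the settings used in the article, and the test signal is an invented sinusoid.

```python
# Sketch of multiscale entropy: coarse-grain at each scale, then compute
# sample entropy on the coarse-grained series.
import math
import statistics

def coarse_grain(x, scale):
    # Non-overlapping window means at the given time scale.
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r): -log of the ratio of (m+1)-length to m-length template
    # matches within tolerance r (Chebyshev distance), self-matches excluded.
    if r is None:
        r = 0.15 * statistics.pstdev(x)
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count(m), count(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

def multiscale_entropy(x, max_scale=3):
    return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]

series = [math.sin(0.5 * i) for i in range(120)]  # toy stand-in for an MUF series
mse = multiscale_entropy(series)
```

How the entropy values change across scales is what distinguishes structured long-term trends from short-period irregularity in the MUF sequence.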
Photoelectrochemical (PEC) sensors have emerged as a promising candidate for biochemical detection. However, PEC sensors based on bulk semiconductors are limited in sensitivity and in the variety of detectable analytes, though they can provide uniform imaging; PEC sensors based on nanomaterials can meet the requirements of diverse analytes and high sensitivity, but the inhomogeneity of their surfaces poses a challenge to imaging uniformity. In this work, metal-assisted chemical etching (MACE) and sputtering processes were used to directly fabricate PEC sensors with fireworks-like silicon nanowires (F-SiNWs) and their heterojunctions, which exhibit excellent performance. By regulating the etching and sputtering times, the structure of the F-SiNWs and the attachment of nanomaterials can be well controlled. The photoelectric and sensing properties of F-SiNWs with different morphologies were investigated and optimized. The sensor fabricated from the optimal F-SiNWs can directly detect H2O2 and indirectly detect glucose under negative bias, while directly detecting dopamine (DA) under positive bias. The detection of these three analytes does not suffer from mutual interference, contributing to an excellent multichannel detection system. Subsequently, the multichannel detection of various secretions (H2O2, DA, and glucose) of PC-12 cells was achieved by the sensor. Finally, the imaging uniformity of the sensor was demonstrated in different solutions. PEC bioimaging was successfully applied to monitor enzyme activity via bioimaging gradients and cell survival via bioimaging distribution. These results demonstrate the sensor's high sensitivity in multichannel detection and its excellent stability and homogeneity in bioimaging. Therefore, the MACE and sputtering processes can be a promising tool for constructing PEC sensors.
•MACE and sputtering processes were used to fabricate a highly sensitive PEC sensor.
•The structure of F-SiNWs/Ag was optimized via the etching and sputtering times.
•The mechanism by which MACE and sputtering enhance PEC performance was studied.
•Multichannel detection of H2O2, DA, and glucose secreted by PC-12 cells was achieved.
•The PEC sensor achieved uniform bioimaging to study enzyme activity and cell survival.