In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or in a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed forms in some plausible special cases. We also derive the average ergodic rate of the typical user, and the minimum average user throughput, i.e., the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs nor the number of tiers changes the outage probability or average ergodic rate in an interference-limited, fully loaded HCN with unbiased cell association (no biasing), and we observe how biasing alters the various metrics.
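The biased-association model above can be sketched with a small Monte Carlo simulation: BSs of each tier are dropped as a Poisson point process, the typical user at the origin associates with the tier maximizing biased received power, and outage is declared when SINR falls below a threshold. All parameter values here (powers, densities, exponents, biases) are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-tier parameters (illustrative only): tier 0 = macro, tier 1 = pico.
powers = np.array([40.0, 1.0])      # transmit powers (W)
densities = np.array([1e-6, 5e-6])  # BS densities (per m^2)
alphas = np.array([3.5, 4.0])       # path loss exponents
biases = np.array([1.0, 4.0])       # association biases (pico positively biased)
area_half = 5000.0                  # BSs dropped in a square of side 2*area_half (m)
sinr_threshold = 1.0                # outage if SINR < threshold
noise = 1e-13                       # noise power (W); interference-limited regime

def outage_probability(n_trials=2000):
    outages = 0
    for _ in range(n_trials):
        # Drop a PPP of BSs per tier; the typical user sits at the origin.
        rx = []
        for k in range(2):
            n = rng.poisson(densities[k] * (2 * area_half) ** 2)
            xy = rng.uniform(-area_half, area_half, size=(n, 2))
            d = np.hypot(xy[:, 0], xy[:, 1])
            d = d[d > 1.0]  # exclude unrealistically close BSs
            rx.append(powers[k] * d ** (-alphas[k]))
        # Biased association: serve from the tier maximizing bias * received power.
        k_serv = int(np.argmax([biases[k] * rx[k].max() for k in range(2)]))
        s = rx[k_serv].max()
        interference = sum(r.sum() for r in rx) - s
        if s / (interference + noise) < sinr_threshold:
            outages += 1
    return outages / n_trials
```

Setting both biases to 1 recovers unbiased (max received power) association, against which the invariance result can be checked numerically.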
Although various linear log-distance path loss models have been developed for wireless sensor networks, advanced models are required to represent the path loss for complex environments more accurately and flexibly. This paper proposes a machine learning framework for modeling path loss using a combination of three key techniques: artificial neural network (ANN)-based multi-dimensional regression, Gaussian process-based variance analysis, and principal component analysis (PCA)-aided feature selection. In general, a measured path loss dataset comprises multiple features such as distance, antenna height, etc. First, PCA is adopted to reduce the number of features of the dataset and accordingly simplify the learning model. The ANN then learns the path loss structure from the reduced-dimension dataset, and the Gaussian process learns the shadowing effect. Path loss data measured in a suburban area in Korea are employed. We observe that the proposed combined path loss and shadowing model is more accurate and flexible than the conventional linear path loss plus log-normal shadowing model.
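The PCA-then-regress pipeline can be sketched on synthetic data (not the Korean measurements): reduce the features by PCA, fit a regressor on the principal components, and treat the residual spread as the shadowing variance. For brevity, a linear least-squares model stands in for the paper's ANN, and the residual standard deviation stands in for the Gaussian-process variance analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a measured path loss dataset. Features (all invented
# for illustration): log10(distance), antenna height, frequency offset, clutter.
n = 500
X = np.column_stack([
    rng.uniform(1.0, 3.0, n),    # log10(distance in m)
    rng.uniform(2.0, 30.0, n),   # antenna height (m)
    rng.uniform(-1.0, 1.0, n),   # normalized frequency offset
    rng.uniform(0.0, 1.0, n),    # clutter index
])
shadowing = rng.normal(0.0, 6.0, n)                  # log-normal shadowing (dB)
pl = 40.0 + 30.0 * X[:, 0] + 0.5 * X[:, 1] + shadowing

# Step 1: PCA via SVD on centered features; keep k principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T

# Step 2: regression on the reduced features (linear model standing in
# for the paper's ANN in this sketch).
A = np.column_stack([Z, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, pl, rcond=None)
pred = A @ coef

# Step 3: residual spread estimates the shadowing variance (the paper
# models this part with a Gaussian process).
sigma_hat = (pl - pred).std()
```

The design choice PCA illustrates: if a low-variance feature carries predictive power, truncating components can hurt the fit, which is why the paper pairs feature selection with a flexible learner rather than applying it blindly.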
This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and closed-loop controls keep the cross-tier interference below a fixed threshold and below an adaptive threshold based on the noise-and-interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate for the uplink throughput degradation of the macrocell BS due to the cross-tier interference, and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost in macrocell throughput.
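The two power caps can be sketched in dB arithmetic: the open-loop cap bounds the interference received at the macro BS by a fixed threshold, while the closed-loop cap adapts the threshold to the measured NI level plus a margin. Function names and the exact cap formulas are illustrative assumptions, not the paper's specification.

```python
def open_loop_cap(p_max_dbm, path_loss_db, i_threshold_dbm):
    # Cap femtocell-user transmit power so that the cross-tier interference
    # received at the macro BS (tx power - path loss) stays below a fixed
    # threshold: p - PL <= I_th  =>  p <= I_th + PL.
    return min(p_max_dbm, i_threshold_dbm + path_loss_db)

def closed_loop_cap(p_max_dbm, path_loss_db, ni_dbm, margin_db):
    # Adaptive threshold: allow interference up to the macro BS's measured
    # noise-and-interference (NI) level plus a margin, so the cap loosens
    # when the macrocell is lightly loaded.
    return min(p_max_dbm, ni_dbm + margin_db + path_loss_db)
```

With a 100 dB path loss, a 23 dBm device cap, and a fixed threshold of -110 dBm, the open-loop cap is -10 dBm; if the macro BS reports NI of -105 dBm with a 3 dB margin, the closed-loop cap relaxes to -2 dBm, illustrating why the closed-loop scheme yields better femtocell throughput.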
Non-terrestrial network (NTN) services using low-Earth-orbit (LEO) satellites are expanding. Interference management between NTN services and other terrestrial wireless services is emerging as a critical issue owing to the inherently international and vast coverage of NTN. This study develops a multi-agent deep reinforcement learning (DRL) framework to establish a multi-beam uplink channel allocation strategy that minimizes interference with incumbent stations under given quality of service (QoS) constraints. We propose a novel framework that trains agents sequentially in a specific order to overcome the inherent non-stationarity of multi-agent DRL. To improve learning efficiency, we design the training sequence in accordance with the reward function and initial state. As a result, taking actions in order of decreasing interference to the incumbent station provides superior performance to taking actions in an arbitrary order. Moreover, the proposed channel allocation performs close to the optimal exhaustive search and outperforms the conventional greedy graph-coloring method.
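The greedy graph-coloring baseline mentioned above can be sketched directly: beams are vertices, an edge joins beams whose co-channel operation would exceed an interference limit, and channels are colors assigned greedily in some beam order. The data structures and names here are illustrative; the paper's point is that the DRL policy beats exactly this kind of ordering-sensitive heuristic.

```python
def greedy_channel_allocation(conflicts, n_channels, order):
    # conflicts: dict mapping each beam to the set of beams it must not
    # share a channel with; order: the sequence in which beams pick.
    assignment = {}
    for beam in order:
        used = {assignment[nb] for nb in conflicts[beam] if nb in assignment}
        for ch in range(n_channels):
            if ch not in used:
                assignment[beam] = ch  # lowest-index free channel
                break
        else:
            assignment[beam] = None    # no feasible channel for this beam
    return assignment
```

As in the paper's training-sequence design, the order argument matters: processing the beams that cause the largest incumbent interference first tends to give them the least-contested channels.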
The 3rd Generation Partnership Project (3GPP) narrowband Internet of Things (NB-IoT) over non-terrestrial networks (NTN) is the most promising candidate technology supporting 5G massive machine-type communication. Compared to geostationary earth orbit, low earth orbit (LEO) satellite communication has the advantage of low propagation loss but suffers from high Doppler shift. The 3GPP proposes Doppler shift pre-compensation for each beam region of the satellite. However, user equipment farther from the beam center experiences significant residual Doppler shift even after pre-compensation, which degrades link performance. This study proposes residual Doppler shift compensation by adding demodulation reference signal symbols and reducing satellite beam coverage. The block error rate (BLER) data are obtained using link-level simulation with the proposed technique. Since the communication time provided by a single fast-moving LEO satellite is short, many LEO satellites are necessary for seamless 24-hour communication. Therefore, with the BLER data, we analyze the link budget for actual three-dimensional orbits with a maximum of 162 LEO satellites. We finally investigate the effect of the proposed technique on performance metrics such as the per-day total service time and the maximum persistent service time, considering the number of satellites and the satellite spacing. The results show that a more prolonged and continuous communication service is possible with significantly fewer satellites using the proposed technique.
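The residual-Doppler problem can be made concrete with a worked example: the shift is f_d = f_c * v_r / c for radial velocity v_r, and beam-wide pre-compensation removes only the shift of the beam-center geometry, leaving edge users with the difference. The carrier frequency and velocities below are illustrative, not the paper's link-budget values.

```python
C = 299_792_458.0  # speed of light (m/s)

def doppler_shift(f_c_hz, radial_velocity_mps):
    # Doppler shift for a given satellite-to-user radial velocity.
    return f_c_hz * radial_velocity_mps / C

# Illustrative S-band carrier and LEO geometry (assumed values): the
# radial velocity toward a beam-edge user differs from that toward the
# beam center because the elevation angles differ.
f_c = 2e9          # carrier frequency (Hz)
v_center = 5000.0  # radial velocity toward the beam-center user (m/s)
v_edge = 5600.0    # radial velocity toward a beam-edge user (m/s)

pre_comp = doppler_shift(f_c, v_center)           # compensated per beam
residual = doppler_shift(f_c, v_edge) - pre_comp  # what the edge user still sees
```

With these numbers the residual is about 4 kHz, large relative to NB-IoT's 15 kHz subcarrier spacing; shrinking the beam coverage narrows the spread between v_edge and v_center, which is exactly why the proposed technique reduces beam size.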
In this paper, we study spectral coexistence between rotating radar and power-controlled cellular networks in radar bands. For the two systems to coexist spectrally, they must be able to operate effectively without causing harmful electromagnetic interference to each other. Very short radar-cellular separation distances are required during 83.3% of the time due to the narrow main beam width of the rotating radar antenna. We propose a spatio-temporal analytical approach with adaptive base station (BS) power control for adjacent spectrum sharing between the two systems. We develop a new model for the aggregate interference from power-controlled cellular BSs using a log-normal approximation. The cellular system is allowed to transmit at high power when the radar antenna's main beam is pointing away from it, and reduces its transmit power only for the short period when the main beam is pointing toward it. We use the degradation of the radar signal-to-interference-plus-noise ratio and the cellular outage probability as our performance metrics. Numerical results show that power control of the cellular BS greatly reduces the required separation distance between the BS and the radar, while yielding only marginal degradation of outage performance. In addition, the mathematical results given by the log-normal approximation closely follow our simulation results. Detailed system-level assessments and investigations are presented to comprehensively understand opportunistic secondary access to this band.
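A log-normal approximation of aggregate interference can be sketched by moment matching in the log domain: fit the mean and standard deviation of log-interference samples, then evaluate exceedance probabilities from the fitted distribution. The sample model below is synthetic and the fitting method is a common one; the paper's exact derivation may differ.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(2)

# Synthetic aggregate interference samples (sum of two log-normal
# interferer groups, stand-in for power-controlled cellular BSs).
samples = (rng.lognormal(mean=-9.0, sigma=1.2, size=10_000)
           + rng.lognormal(mean=-10.0, sigma=0.8, size=10_000))

# Moment matching in the log domain: fit mu and sigma of log(I).
log_s = np.log(samples)
mu_hat, sigma_hat = log_s.mean(), log_s.std()

def lognormal_exceed(threshold, mu, sigma):
    # P(I > threshold) under the fitted log-normal, via the Gaussian CDF.
    z = (np.log(threshold) - mu) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

The fitted tail probability can then be compared against the empirical exceedance rate of the samples, which is the kind of approximation-versus-simulation agreement the abstract reports.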
A High Altitude Platform Station (HAPS) can facilitate high-speed data communication over wide areas using high-power line-of-sight communication; however, it can significantly interfere with existing systems. Given spectrum sharing with existing systems, the HAPS transmission power must be adjusted to satisfy the interference requirement for incumbent protection. However, excessive transmission power reduction can severely degrade the HAPS coverage. To solve this problem, we propose a multi-agent Deep Q-learning (DQL)-based transmission power control algorithm that minimizes the outage probability of the HAPS downlink while satisfying the interference requirement of the interfered system. In addition, a double DQL (DDQL) is developed to prevent the potential risk of action-value overestimation in DQL. With a proper state, reward, and training process, all agents cooperatively learn a power control policy that achieves a near-optimal solution. The proposed DQL power control algorithm performs equal or close to the optimal exhaustive search algorithm for varying positions of the interfered system. The proposed DQL and DDQL power controls yield the same performance, which indicates that action-value overestimation does not adversely affect the quality of the learned policy.
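The DQL-versus-DDQL distinction comes down to how the bootstrap target is formed: standard DQL lets the target network both select and evaluate the next action, while double DQL selects with the online network and evaluates with the target network, curbing the maximization bias. The toy Q-values below are arbitrary and only illustrate the target computation.

```python
import numpy as np

gamma = 0.9   # discount factor (illustrative)
reward = 1.0  # one-step reward (illustrative)

q_online = np.array([2.0, 3.5, 1.0])  # online network's Q(s', a)
q_target = np.array([2.5, 2.0, 1.2])  # target network's Q(s', a)

# DQL target: the target network both selects and evaluates the action,
# so any overestimated action value is propagated directly.
dql_target = reward + gamma * q_target.max()

# DDQL target: the online network selects the action, the target network
# evaluates it, decoupling selection from evaluation.
a_star = int(np.argmax(q_online))
ddql_target = reward + gamma * q_target[a_star]
```

Here the DDQL target (2.8) is lower than the DQL target (3.25); the abstract's finding that both controls perform identically suggests such overestimation, when present, did not distort the learned power control policy.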
The integration of unmanned aerial vehicles (UAVs) into spectrum-sensing cognitive communication networks can offer many benefits for massive connectivity services in 5G communications and beyond; hence, this work analyses the performance of non-orthogonal multiple access-based cognitive UAV-assisted ultra-reliable and low-latency communication (URLLC) and massive machine-type communication (mMTC) services. An mMTC service requires better energy efficiency and connection probability, whereas a URLLC service requires minimising latency. In particular, a cognitive UAV operates as an aerial secondary transmitter to a ground base station by sharing the unlicensed wireless spectrum. To address these requirements, we derive analytical expressions for the throughput, energy efficiency, and latency of an mMTC/URLLC-UAV device. We also formulate an optimisation problem of energy efficiency maximisation subject to the URLLC latency and mMTC throughput requirements, and solve it using the Lagrangian method and the Karush-Kuhn-Tucker conditions. An algorithm is presented that jointly optimises the transmission powers of the mMTC and URLLC users. The derived expressions and the algorithm are then used to evaluate the performance of the proposed system model. The numerical results show that the proposed algorithm improves the energy efficiency and satisfies the latency requirement of the mMTC/URLLC-UAV device.
This paper studies the spectral and energy efficiency (SE/EE) of a heterogeneous network whose backhaul is enabled by full-duplex massive multiple-input multiple-output (MIMO) with low-resolution analog-to-digital converters (ADCs) over Rician channels. Backhaul communication is completed over two phases. During the first phase, the macro-cell (MC) base station (BS) deploys massive receive antennas and a few transmit antennas, while the small-cell (SC) BSs employ large-scale receive antennas and a single transmit antenna. In the second phase, the roles of the transmit and receive antennas are switched. Due to the low-resolution ADCs, we account for quantization noise (QN). We characterize the joint impact of the number of antennas, self-interference, SC-to-SC interference, QN, and the Rician K-factor. In the first phase, the SE is enhanced by the massive receive antennas and the loss due to QN is limited. In the second phase, the desired signal and the QN are of the same order, so the SE saturates with massive transmit antennas. As the Rician K-factor increases, the SE converges. Power scaling laws are derived to demonstrate that the transmit power can be scaled down in proportion to the number of antennas. We investigate the EE/SE trade-offs. The envelope of the EE/SE region grows as the Rician K-factor increases.
In the Industrial Internet of Things (IIoT), energy efficiency is a paramount concern as it directly affects operational longevity. Traditional approaches, like flooding for time synchronization, often result in redundant message transmissions, thereby wasting energy. This article introduces an intelligent neighbor-knowledge synchronization (INKS) method to mitigate this problem. The INKS algorithm leverages each node's knowledge of its neighboring nodes to optimize the synchronization process, thereby reducing the total number of synchronization messages and conserving energy. INKS is implemented and evaluated on real wireless sensor networks with varying configurations. The experimental results demonstrate its superior performance to existing techniques, such as rapid-flooding multiple one-way broadcast time synchronization (RMTS). Additionally, the performance of INKS is evaluated through simulations conducted on large-scale networks. For a four-way grid network topology, the findings reveal that INKS reduces the number of transmitted messages by approximately 72% compared to RMTS. Moreover, INKS matches the efficiency of scheduling-based low-energy synchronization for IIoT.