Forthcoming sixth-generation (6G) networks will empower ultra-reliable and low-latency communications (URLLC), enabling a wide array of mission-critical applications such as mobile edge computing (MEC) systems, which are largely unsupported by fixed communication infrastructure. To remedy this issue, unmanned aerial vehicles (UAVs) have recently come to the limelight as facilitators of MEC for Internet of Things (IoT) devices, since their added flexibility and three-dimensional (3D) positioning provide more favorable line-of-sight (LoS) communications than fixed terrestrial networks. In this paper, we consider UAV-enabled relaying for MEC systems for uplink transmissions in 6G networks, and we aim to minimize mission completion time subject to resource-allocation constraints, including UAV transmit power, UAV CPU frequency, decoding error rate, blocklength, communication bandwidth, and task partitioning, as well as 3D UAV positioning. To solve the resulting non-convex optimization problem, we propose three algorithms: successive convex approximation (SCA), an altered genetic algorithm (AGA), and smart exhaustive search (SES). Based on time-complexity, execution-time, and convergence analyses, we select the AGA to solve the given optimization problem. Simulation results demonstrate that the proposed algorithm successfully minimizes mission completion time, performs power allocation at the UAV to mitigate information leakage and eavesdropping, and maps 3D UAV positions, yielding better results than the fixed benchmark sub-methods. Lastly, subject to 3D UAV positioning, the AGA can also effectively reduce the decoding error rate to support URLLC services.
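The altered genetic algorithm is described only at a high level above. As a hedged illustration of the generic genetic-algorithm template such a method builds on (not the paper's actual AGA), the sketch below minimizes an invented toy "completion time" objective over a 3D UAV position; the objective, bounds, and all parameters are assumptions for illustration only.

```python
import random

# Toy stand-in objective: pretend mission completion time grows with the
# distance from an invented optimal hover point (50, 50, 100).
def completion_time(pos):
    x, y, z = pos
    return ((x - 50) ** 2 + (y - 50) ** 2 + (z - 100) ** 2) ** 0.5

def genetic_minimize(obj, bounds, pop=40, gens=200, mut=0.1):
    lo, hi = bounds
    population = [[random.uniform(lo, hi) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=obj)            # rank by fitness (lower is better)
        survivors = population[: pop // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # averaging crossover
            if random.random() < mut:                        # Gaussian mutation
                child[random.randrange(3)] += random.gauss(0, 5)
            children.append(child)
        population = survivors + children
    return min(population, key=obj)

random.seed(0)
best = genetic_minimize(completion_time, (0, 150))
```

A production AGA would additionally encode the power, bandwidth, and task-partitioning variables in the chromosome and penalize constraint violations in the fitness function.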
Nonlinear equation systems are ubiquitous in a variety of fields, and how to tackle them, especially dynamic ones, has drawn much attention. As a particular class of recurrent neural network, the zeroing neural network (ZNN) takes time-derivative information into consideration and is thus a competent approach to dealing with dynamic problems. Hitherto, two kinds of ZNN models have been developed for solving systems of dynamic nonlinear equations: one is explicit but involves the computation of a pseudoinverse matrix, and the other is essentially of implicit dynamics. To address both issues at once, a low-computational-complexity ZNN (LCCZNN) model is proposed; it does not need to compute any pseudoinverse matrix and takes the form of explicit dynamics. In addition, a novel activation function is presented to endow the LCCZNN model with finite-time convergence and certain robustness, which is proved rigorously by Lyapunov theory. Numerical experiments are conducted to validate the theoretical analyses, including the competence and robustness of the LCCZNN model. Finally, a pseudoinverse-free controller derived from the LCCZNN model is designed for a UR5 manipulator to accomplish a trajectory-following task online.
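As background on the design principle (the standard ZNN construction from the literature, not the specific LCCZNN dynamics), a zeroing neural network for a dynamic system $f(\mathbf{x}(t),t)=\mathbf{0}$ defines the error $\mathbf{e}(t)=f(\mathbf{x}(t),t)$ and imposes the evolution

$$\dot{\mathbf{e}}(t) = -\gamma\,\Phi\bigl(\mathbf{e}(t)\bigr), \qquad \gamma>0,$$

with $\Phi(\cdot)$ an elementwise odd, monotonically increasing activation function. Expanding the left-hand side by the chain rule gives

$$J(\mathbf{x},t)\,\dot{\mathbf{x}}(t) + \frac{\partial f}{\partial t} = -\gamma\,\Phi\bigl(f(\mathbf{x}(t),t)\bigr), \qquad J(\mathbf{x},t) = \frac{\partial f}{\partial \mathbf{x}},$$

which is implicit in $\dot{\mathbf{x}}$ unless the Jacobian $J$ is inverted or pseudoinverted; that inversion cost is exactly what an explicit, pseudoinverse-free formulation seeks to avoid.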
Gaussian processes (GPs) are commonly used as models for functions, time series, and spatial fields, but they are computationally infeasible for large datasets. Focusing on the typical setting of modeling data as a GP plus an additive noise term, we propose a generalization of the Vecchia (J. Roy. Statist. Soc. Ser. B 50 (1988) 297–312) approach as a framework for GP approximations. We show that our general Vecchia approach contains many popular existing GP approximations as special cases, allowing for comparisons among the different methods within a unified framework. Representing the models by directed acyclic graphs, we determine the sparsity of the matrices necessary for inference, which leads to new insights regarding the computational properties. Based on these results, we propose a novel sparse general Vecchia approximation, which ensures computational feasibility for large spatial datasets but can lead to considerable improvements in approximation accuracy over Vecchia's original approach. We provide several theoretical results and conduct numerical comparisons. We conclude with guidelines for the use of Vecchia approximations in spatial statistics.
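For context, Vecchia's original idea (stated generally rather than in the paper's exact notation) is to approximate the joint density of observations $y_1,\dots,y_n$ by conditioning each variable on only a small subset of the preceding ones:

$$p(y_1,\dots,y_n) = \prod_{i=1}^{n} p\bigl(y_i \mid y_1,\dots,y_{i-1}\bigr) \;\approx\; \prod_{i=1}^{n} p\bigl(y_i \mid y_{c(i)}\bigr),$$

where $c(i) \subset \{1,\dots,i-1\}$ is a small conditioning set (e.g., nearest neighbors in space). Each factor is then a low-dimensional Gaussian conditional, so for fixed $|c(i)|$ the overall cost scales linearly in $n$ rather than cubically.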
Sampling can be faster than optimization
Ma, Yi-An; Chen, Yuansi; Jin, Chi ...
Proceedings of the National Academy of Sciences - PNAS, 10/2019, Volume 116, Issue 42
Journal Article; Peer reviewed; Open access
Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years. There is, however, limited theoretical understanding of the relationships between these two kinds of methodology, and limited understanding of their relative strengths and weaknesses. Moreover, existing results have been obtained primarily in the setting of convex functions (for optimization) and log-concave functions (for sampling). In this setting, where local properties determine global properties, optimization algorithms are unsurprisingly more efficient computationally than sampling algorithms. We instead examine a class of nonconvex objective functions that arise in mixture modeling and multistable systems. In this nonconvex setting, we find that the computational complexity of sampling algorithms scales linearly with the model dimension while that of optimization algorithms scales exponentially.
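The multistable phenomenon can be made concrete with an unadjusted Langevin sampler on a toy one-dimensional double well; this is a hedged illustration of the sampling side in general, not the paper's setting, algorithm, or scaling experiment.

```python
import math
import random

# Toy multistable potential U(x) = (x^2 - 1)^2 with wells at x = -1 and x = +1.
def grad_U(x):
    return 4 * x * (x * x - 1)

def langevin_samples(n_steps=20000, step=0.01, burn_in=2000):
    """Unadjusted Langevin: x <- x - step*grad_U(x) + sqrt(2*step)*noise."""
    x, out = 0.5, []
    for t in range(n_steps):
        x = x - step * grad_U(x) + math.sqrt(2 * step) * random.gauss(0, 1)
        if t >= burn_in:
            out.append(x)
    return out

random.seed(1)
samples = langevin_samples()
# The injected noise lets the chain cross the barrier and visit both modes;
# plain gradient descent started at x = 0.5 would settle into the +1 well only.
frac_left = sum(s < 0 for s in samples) / len(samples)
```

The gradient step exploits local structure while the noise term enables barrier crossings, which is the intuition behind sampling remaining tractable where nonconvex optimization stalls.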
In this brief, an affine-projection-like M-estimate (APLM) algorithm is proposed for robust adaptive filtering, with the aim of eliminating the adverse effects of impulsive noise on the filter weight updates in impulse-interference environments. The proposed APLM algorithm uses a robust cost function based on the M-estimate and is derived by the unconstrained minimization method. More importantly, the APLM algorithm has lower computational complexity than the M-estimate affine projection algorithm, since no direct or indirect inversion of the input signal matrix needs to be calculated. To further improve the performance of the APLM algorithm, namely its convergence speed and steady-state misalignment, a convex combination of APLM (C-APLM) algorithm is presented. Simulation results verify that the proposed APLM and C-APLM algorithms are effective in system identification and echo cancellation scenarios, and demonstrate that the C-APLM algorithm improves filter performance in terms of convergence speed and normalized mean squared deviation in the presence of impulsive noise.
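To make the role of the M-estimate cost concrete, the following sketch uses a Huber-weighted LMS-style update in a system-identification scenario. This is a simplified stand-in for the general idea of M-estimate adaptive filtering, not the APLM update itself; the filter order, step size, threshold, and noise model are all invented for illustration.

```python
import random

def huber_weight(e, delta=1.0):
    """M-estimate influence: full weight for small errors, clipped for outliers."""
    return 1.0 if abs(e) <= delta else delta / abs(e)

def robust_lms(xs, ds, order=4, mu=0.05):
    w = [0.0] * order
    for n in range(order, len(xs)):
        u = xs[n - order:n][::-1]                 # input regressor (delay line)
        e = ds[n] - sum(wi * ui for wi, ui in zip(w, u))
        g = mu * huber_weight(e) * e              # impulses are down-weighted
        w = [wi + g * ui for wi, ui in zip(w, u)]
    return w

# Identify a toy 4-tap system under occasional large impulses.
random.seed(2)
true_w = [0.6, -0.3, 0.2, 0.1]
xs = [random.gauss(0, 1) for _ in range(5000)]
ds = []
for n in range(len(xs)):
    u = xs[max(0, n - 4):n][::-1]
    d = sum(wi * ui for wi, ui in zip(true_w, u))
    if random.random() < 0.01:                    # sparse impulsive interference
        d += random.choice([-1, 1]) * 50
    ds.append(d + random.gauss(0, 0.01))

w_est = robust_lms(xs, ds)
```

A plain LMS update (weight fixed at 1) would be repeatedly knocked off course by the impulses, while the clipped influence function bounds each update's magnitude.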
While Pareto-based multiobjective optimization algorithms continue to show effectiveness for a wide range of practical problems that involve mostly two or three objectives, their limited applicability to many-objective problems, due to the increasing proportion of nondominated solutions and the lack of sufficient selection pressure, has also been gradually recognized. In this paper, we revive an early-developed and computationally expensive strength-Pareto-based evolutionary algorithm by introducing an efficient reference-direction-based density estimator, a new fitness assignment scheme, and a new environmental selection strategy, for handling both multiobjective and many-objective problems. The performance of the proposed algorithm is validated and compared with state-of-the-art algorithms on a number of test problems. Experimental studies demonstrate that the proposed method shows very competitive performance on both the multiobjective and many-objective problems considered in this paper. Besides, our extensive investigations and discussions reveal an interesting finding: diversity-first-and-convergence-second selection strategies may have great potential for dealing with many-objective optimization.
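The selection-pressure problem mentioned above can be seen in a few lines: among uniformly random points, the fraction that is Pareto-nondominated rises sharply with the number of objectives. This is a self-contained illustration of the phenomenon, not the paper's experiment.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_obj, rng):
    pts = [[rng.random() for _ in range(n_obj)] for _ in range(n_points)]
    nd = [p for p in pts if not any(dominates(q, p) for q in pts if q is not p)]
    return len(nd) / n_points

rng = random.Random(3)
frac2 = nondominated_fraction(200, 2, rng)     # two objectives: a small front
frac10 = nondominated_fraction(200, 10, rng)   # ten objectives: nearly all points
# With many objectives almost every point is mutually nondominated, so pure
# Pareto dominance provides almost no selection pressure.
```

This is why many-objective methods supplement dominance with density estimators or reference directions, as the revived algorithm above does.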
We propose a filtering feature selection framework that considers subsets of features as paths in a graph, where a node is a feature and an edge indicates pairwise (customizable) relations among features, dealing with relevance and redundancy principles. Through two different interpretations (exploiting properties of power series of matrices, and relying on Markov chain fundamentals) we can evaluate the values of paths (i.e., feature subsets) of arbitrary length, eventually going to infinity, from which we dub our framework Infinite Feature Selection (Inf-FS). Going to infinity allows us to constrain the computational complexity of the selection process and to rank the features in an elegant way, namely by considering the value of every path (subset) containing a particular feature. We also propose a simple unsupervised strategy to cut the ranking, thereby providing the subset of features to keep. In the experiments, we analyze diverse settings with heterogeneous features, for a total of 11 benchmarks, comparing against 18 widely known approaches. The results show that Inf-FS behaves better in almost any situation, that is, both when the number of features to keep is fixed a priori and when the decision on the subset cardinality is part of the process.
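The power-series view admits a compact sketch: given a pairwise-relation matrix A, the value of all paths through feature i is the i-th row sum of the geometric series over path lengths, which converges for a small enough damping factor alpha. The toy relation matrix and alpha below are invented for illustration, and the truncated series stands in for the closed form (I - alpha*A)^{-1} - I.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inf_fs_scores(A, alpha=0.2, max_len=50):
    """Accumulated value of paths of length 1..max_len through each feature."""
    n = len(A)
    alphaA = [[alpha * A[i][j] for j in range(n)] for i in range(n)]
    term = [row[:] for row in alphaA]                 # (alpha*A)^1
    total = [[0.0] * n for _ in range(n)]
    for _ in range(max_len):
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
        term = mat_mul(term, alphaA)                  # next power of alpha*A
    return [sum(row) for row in total]                # rank features by row sum

# Toy symmetric relation matrix: feature 0 relates strongly to the others,
# feature 2 only weakly; values are assumptions for illustration.
A = [[0.0, 0.9, 0.8],
     [0.9, 0.0, 0.1],
     [0.8, 0.1, 0.0]]
scores = inf_fs_scores(A)
ranking = sorted(range(3), key=lambda i: -scores[i])
```

Summing to infinity in closed form is what lets the real framework score every feature against all subsets at the cost of a single matrix inversion.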
•Linear discrepancy is shown to be Π2-hard. •The hardness holds even for approximation within any constant factor less than 9/8. •Together with a previous result of Li and Nikolov, linear discrepancy is Π2-complete.
In this note, we prove that the problem of computing the linear discrepancy of a given matrix is Π2-hard, even to approximate within a 9/8−ϵ factor for any ϵ>0. This strengthens the NP-hardness result of Li and Nikolov [9] for the exact version of the problem, and answers a question posed by them. Furthermore, since Li and Nikolov showed that the problem is contained in Π2, our result makes linear discrepancy another natural problem that is Π2-complete (to approximate).
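For reference, the quantity in question is defined (following the standard formulation in the discrepancy literature) for a matrix $A \in \mathbb{R}^{m \times n}$ as

$$\operatorname{lindisc}(A) \;=\; \max_{w \in [0,1]^n}\; \min_{x \in \{0,1\}^n}\; \lVert A(w - x)\rVert_\infty,$$

i.e., the worst case, over fractional vectors $w$, of how well $Aw$ can be rounded to $Ax$ with Boolean $x$. The max-min alternation of quantifiers is what places the problem naturally at the second level of the polynomial hierarchy, in $\Pi_2$.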
Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) has been regarded as an emerging solution for the next generation of communications, in which hybrid analog and digital precoding is an important method for reducing the hardware complexity and energy consumption associated with mixed-signal components. However, the fundamental limitations of existing hybrid precoding schemes are their high computational complexity and their failure to fully exploit spatial information. To overcome these limitations, this paper proposes a deep-learning-enabled mmWave massive MIMO framework for effective hybrid precoding, in which each selection of the precoders for obtaining the optimized decoder is regarded as a mapping relation in a deep neural network (DNN). Specifically, the hybrid precoder is selected through DNN-based training to optimize the precoding process of the mmWave massive MIMO system. Additionally, we present extensive simulation results to validate the excellent performance of the proposed scheme. The results exhibit that the DNN-based approach is capable of minimizing the bit error ratio and enhancing the spectral efficiency of mmWave massive MIMO, achieving better hybrid precoding performance than conventional schemes while substantially reducing the required computational complexity.
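As background on the structure being learned (the standard narrowband hybrid-precoding signal model from the mmWave literature, not the paper's specific notation), the received signal is

$$\mathbf{y} = \mathbf{H}\,\mathbf{F}_{\mathrm{RF}}\,\mathbf{F}_{\mathrm{BB}}\,\mathbf{s} + \mathbf{n},$$

where $\mathbf{F}_{\mathrm{RF}} \in \mathbb{C}^{N_t \times N_{\mathrm{RF}}}$ is the analog precoder implemented with phase shifters (constant-modulus entries) and $\mathbf{F}_{\mathrm{BB}} \in \mathbb{C}^{N_{\mathrm{RF}} \times N_s}$ is the digital baseband precoder. With $N_{\mathrm{RF}} \ll N_t$ RF chains, the mixed-signal hardware cost drops sharply, and the DNN's task reduces to learning a good mapping from the channel to the pair $(\mathbf{F}_{\mathrm{RF}}, \mathbf{F}_{\mathrm{BB}})$.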