Dual decomposition methods are among the most prominent approaches for finding primal/dual saddle point solutions of resource allocation optimization problems. To deploy these methods in the emerging Internet of Things networks, which will often have limited data rates, it is important to understand the communication overhead they require. Motivated by this, we introduce and explore two measures of communication complexity of dual decomposition methods to identify the most communication-efficient of these algorithms. The first measure is ε-complexity, which quantifies the minimal number of bits needed to find an ε-accurate solution. The second measure is b-complexity, which quantifies the best possible solution accuracy that can be achieved from communicating b bits. We find the exact ε- and b-complexity of a class of resource allocation problems where a single supplier allocates resources to multiple users. For both the primal and dual problems, the ε-complexity grows proportionally to log2(1/ε) and the b-complexity proportionally to 1/2^b. We also introduce a variant of the ε- and b-complexity measures where only algorithms that ensure primal feasibility of the iterates are allowed. Such algorithms are often desirable because overuse of the resources can overload the respective systems, e.g., by causing blackouts in power systems. We provide upper and lower bounds on the convergence rate of these primal feasible complexity measures. In particular, we show that the b-complexity cannot converge at a faster rate than O(1/b). Therefore, the results demonstrate a tradeoff between fast convergence and primal feasibility. We illustrate the results with numerical studies.
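As a toy illustration of the b-complexity scaling reported above (a minimal sketch under an assumed [0, 1) normalization, not the paper's algorithm): if the quantity to be communicated, say a dual price, lies in the unit interval, then b bits of bisection feedback pin it down to an interval of width 1/2^b, so the attainable accuracy improves proportionally to 1/2^b.

```python
def quantize_price(price, bits):
    """Encode a value in [0, 1) with `bits` bisection steps; return the
    transmitted bit string and the decoded interval midpoint."""
    lo, hi = 0.0, 1.0
    msg = []
    for _ in range(bits):
        mid = (lo + hi) / 2.0
        if price >= mid:
            msg.append(1)  # value in upper half of current interval
            lo = mid
        else:
            msg.append(0)  # value in lower half
            hi = mid
    return msg, (lo + hi) / 2.0  # worst-case error: 1 / 2**(bits + 1)

# Each extra bit halves the worst-case error, matching the 1/2^b scaling.
bits, decoded = quantize_price(0.62, 8)
assert len(bits) == 8 and abs(decoded - 0.62) <= 1.0 / 2**8
```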
As one of the key technologies of Versatile Video Coding (VVC), the flexible quad-tree with nested multi-type tree (QTMT) partition structure significantly improves rate-distortion (RD) performance. However, this structure brings additional complexity due to the recursive search for the best partition type. Traditional fast partition methods from previous encoders cannot adapt to this new, more complex structure, because predicting each block size layer by layer is too complicated. Some indirect, bottom-up methods are simple enough but cannot predict specific split structures, which limits their acceleration capacity. Therefore, in this paper, we propose a learning-based approach that effectively predicts the QTMT structure without heuristically exploring the partitions of each layer. Firstly, we propose a hierarchy grid fully convolutional network (HG-FCN) framework, which requires only a single, highly parallel inference to obtain the entire partition information of the current CU and its sub-CUs. Secondly, we design a representation of the complicated QTMT CU partition in the form of a hierarchy grid map (HGM), which can directly and effectively predict the specific hierarchical split structure. Lastly, a dual-threshold decision scheme is adopted to automatically control the trade-off between coding performance and complexity. Extensive experiments demonstrate the effectiveness of HG-FCN, which reduces the complexity of VVC intra coding by 51.15% to 65.53% with a negligible 1.17% to 2.19% BD-BR increase, superior to other state-of-the-art methods.
Fast Compressed Wideband Spectrum Sensing. Wei, Ziping; Zhang, Han; Zhang, Yang, et al. IEEE Transactions on Vehicular Technology, 02/2024, Volume 73, Issue 2. Journal article, peer reviewed.
Compressed wideband spectrum sensing has attracted much interest in recent years, as it enables flexible spectrum sharing to improve the efficiency of scarce frequency resources. Despite the great potential of sub-Nyquist-rate sampling, existing highly accurate compressed sensing (CS) methods unfortunately incur extremely high computational complexity, e.g., in recovering the sparse signal or estimating the a priori information on sparsity. This creates a serious challenge for deploying real-time wideband sensing on resource-constrained platforms. In this work, we develop a fast compressed spectrum sensing method that achieves accurate performance while greatly reducing computational complexity. Our new method jointly exploits the low-rank and sparse properties of a sub-Nyquist measurement matrix. We first design a low-complexity sparsity estimator by approximating a large covariance matrix with multiple small matrices. To recover the sparse spectrum, we then formulate a low-dimensional non-convex optimization problem via random orthogonal projection, which makes the CS method more computationally efficient. As demonstrated on real datasets, our method reduces the computational complexity of wideband spectrum sensing by roughly 10x; moreover, it achieves highly accurate results without compromising the reconstruction/sensing performance. Thus, it holds great promise for real-time sub-Nyquist sensing on low-complexity platforms.
Chaotic systems are widely studied in various research areas such as signal processing and secure communication. Existing chaotic systems may have drawbacks such as discontinuous chaotic ranges and incomplete output distributions, which can lead to defects in some chaos-based applications. To address these challenges, this paper proposes a two-dimensional (2D) modular chaotification system (2D-MCS) to improve the chaos complexity of any 2D chaotic map. Because the modular operation is a bounded transform, the chaotic maps improved by 2D-MCS can generate chaotic behaviors over wide parameter ranges, which existing chaotic maps cannot. Three improved chaotic maps are presented as typical examples to verify the effectiveness of 2D-MCS. The chaos properties of one example of 2D-MCS are mathematically analyzed using the definition of the Lyapunov exponent. Performance evaluations demonstrate that these improved chaotic maps have continuous and large chaotic ranges, and their outputs are distributed more uniformly than those of existing 2D chaotic maps. To show the applicability of 2D-MCS, we apply its improved chaotic maps to secure communication. The simulation results show that these improved chaotic maps outperform several existing and newly developed chaotic maps in resisting different levels of channel noise.
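A minimal sketch of the modular chaotification idea described above: wrap the output of a 2D seed map with a mod-1 operation so the orbit stays in the unit square for arbitrarily large control parameters. The sine/cosine seed map and the parameter values below are illustrative assumptions, not the paper's exact construction.

```python
import math

def seed_map(x, y, a, b):
    """An illustrative 2D seed map (assumed for this sketch)."""
    return a * math.sin(math.pi * y) + x, b * math.cos(math.pi * x) + y

def mcs_step(x, y, a, b):
    """One 2D-MCS-style step: apply the seed map, then wrap each
    coordinate into [0, 1) with the bounded modular transform."""
    nx, ny = seed_map(x, y, a, b)
    return nx % 1.0, ny % 1.0

# Even with large parameters the modular wrap keeps the orbit bounded,
# which is what permits chaotic behavior over wide parameter ranges.
x, y = 0.3, 0.7
for _ in range(1000):
    x, y = mcs_step(x, y, a=15.0, b=23.0)
assert 0.0 <= x < 1.0 and 0.0 <= y < 1.0
```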
Complexity is ubiquitous in modern engineering and project management, and it is traditionally associated with failure. Yet complexity also works: it delivers functionality, creativity, and innovation. Complexity management contributes to the success of high-risk IT projects, helps better project understanding, and allows for better prioritization and planning of resources. Managing negative complexity reduces project risk; positive and appropriate complexity are catalysts for opportunities. This paper is a qualitative longitudinal study based on multiple industry project cases, consisting of the repeated evaluation of a set of complexity management tools. The tools were deployed in a classical process framework: plan, identify, analyze, plan responses, monitor, and control. The evaluated tools red-flag and measure complexity, analyze its sources and effects, and plan mitigation strategies. The study aims to provide project managers with methods for increasing project success rates and reducing failure in complex IT project environments.
Matrix-factorization (MF)-based approaches prove to be highly accurate and scalable in addressing collaborative filtering (CF) problems. During the MF process, non-negativity, which ensures good representativeness of the learnt model, is critically important. However, current non-negative MF (NMF) models are mostly designed for problems in computer vision, whereas CF problems differ from them in the extreme sparsity of the target rating matrix. Currently available NMF-based CF models rely on whole-matrix manipulation and lack practicability for industrial use. In this work, we focus on developing an NMF-based CF model with a single-element-based approach. The idea is to carry out the non-negative update process on each involved feature rather than on the whole feature matrices. With the non-negative single-element-based update rules, we then integrate Tikhonov regularizing terms and propose the regularized single-element-based NMF (RSNMF) model. RSNMF is especially suitable for solving CF problems subject to the constraint of non-negativity. Experiments on large industrial datasets show the high accuracy and low computational complexity achieved by RSNMF.
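The single-element-based idea, updating only the features touched by each observed rating while keeping them non-negative, can be sketched as follows. The learning rate, regularization weight, and clipped-gradient rule here are illustrative assumptions, not the paper's exact RSNMF update rules.

```python
import random

def rsnmf_epoch(ratings, P, Q, lr=0.01, lam=0.05):
    """One pass of a single-element-style non-negative update: each
    observed rating (u, i, r) updates only the feature vectors it
    involves. Clipping at zero is a hedged stand-in for the paper's
    non-negative update rules; lam gives the Tikhonov (L2) term."""
    for u, i, r in ratings:
        err = r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))
        for k in range(len(P[u])):
            pu, qi = P[u][k], Q[i][k]
            P[u][k] = max(0.0, pu + lr * (err * qi - lam * pu))
            Q[i][k] = max(0.0, qi + lr * (err * pu - lam * qi))

# Tiny sparse rating matrix: training reduces the squared error while
# every factor stays non-negative.
random.seed(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
P = [[random.random() for _ in range(4)] for _ in range(2)]  # user factors
Q = [[random.random() for _ in range(4)] for _ in range(3)]  # item factors
sse = lambda: sum((r - sum(a * b for a, b in zip(P[u], Q[i]))) ** 2
                  for u, i, r in ratings)
before = sse()
for _ in range(200):
    rsnmf_epoch(ratings, P, Q)
assert sse() < before and all(v >= 0.0 for row in P + Q for v in row)
```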
A large-scale fully digital receive antenna array can provide very high-resolution direction of arrival (DOA) estimation, but it incurs a significantly high RF-chain circuit cost. Thus, a hybrid analog and digital (HAD) structure is preferred. Two phase alignment (PA) methods, HAD PA (HADPA) and hybrid digital and analog PA (HDAPA), are proposed to estimate DOA based on the parametric method. Compared to analog PA (APA), they can significantly reduce the complexity of the PA phases. Subsequently, a fast root multiple signal classification HDAPA (root-MUSIC-HDAPA) method is proposed specifically for this hybrid structure to implement an approximately analytical solution. Due to the HAD structure, a direction-finding ambiguity arises. A smart strategy of maximizing the average receive power is adopted to eliminate the spurious solutions and preserve the true optimal solution by linear searching over a limited, finite set of candidate directions, resulting in a significant reduction in computational complexity. Eventually, the Cramer-Rao lower bound (CRLB) on finding the emitter direction using the HAD structure is derived. Simulation results show that our proposed methods, root-MUSIC-HDAPA and HDAPA, can achieve the hybrid CRLB with complexities significantly lower than those of pure linear-searching-based methods such as APA.
In this letter, the impact of two phase shifting designs, namely coherent phase shifting and random discrete phase shifting, on the performance of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) is studied. Analytical and simulation results are provided to show that the two designs achieve different tradeoffs between reliability and complexity. To further improve the reception reliability of the random phase shifting design, a low-complexity phase selection scheme is also proposed in this letter.
•Multivariate permutation Lempel-Ziv complexity (MvPLZC) is first proposed.
•The multivariate threshold adjusted PLZC (MvTAPLZC) is proposed to improve the pattern representation accuracy of MvPLZC.
•The proposed method always exhibits the best feature extraction capability.
Permutation Lempel-Ziv complexity (PLZC) has emerged as an efficient methodology for analyzing the complexity of single-channel time series. However, PLZC cannot comprehensively characterize the complexity of multichannel time series. To address this limitation, we propose multivariate PLZC (MvPLZC) by introducing multivariate embedding theory; it comprehensively characterizes multichannel complexity by fusing permutation patterns both within and between channels. Furthermore, we propose multivariate threshold adjusted PLZC (MvTAPLZC), which improves the pattern representation accuracy of MvPLZC and increases the precision of complexity estimation by applying a threshold-extended permutation pattern. Three sets of simulation experiments indicate that MvTAPLZC has superior stability, robustness, and differentiation capability, and two sets of realistic experiments demonstrate its advantages in distinguishing different types of bearing fault signals.
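To make the pipeline concrete, here is a single-channel PLZC-style sketch: symbolize the series with ordinal (permutation) patterns, then measure complexity by a Lempel-Ziv phrase parse. The LZ78-style phrase count below is a simplified stand-in for the LZ76 complexity typically used in PLZC, and the multivariate fusion across channels is not shown.

```python
import random
from itertools import permutations

def ordinal_symbols(x, m=3):
    """Map a time series to ordinal-pattern symbols of order m."""
    pats = {p: s for s, p in enumerate(permutations(range(m)))}
    return [pats[tuple(sorted(range(m), key=lambda k: x[i + k]))]
            for i in range(len(x) - m + 1)]

def lz_phrase_count(symbols):
    """Simplified Lempel-Ziv complexity: count distinct phrases in an
    LZ78-style incremental parse (a hedged stand-in for LZ76)."""
    phrases, cur = set(), ()
    for s in symbols:
        cur += (s,)
        if cur not in phrases:
            phrases.add(cur)
            cur = ()  # start a new phrase after each novel one
    return len(phrases) + (1 if cur else 0)

# A noisy series parses into many more phrases than a monotone ramp.
random.seed(1)
noisy = [random.random() for _ in range(200)]
ramp = [0.01 * i for i in range(200)]
assert lz_phrase_count(ordinal_symbols(noisy)) > lz_phrase_count(ordinal_symbols(ramp))
```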
Famous trials not only generate immense popularity and intrigue, they also have the power to change history. Surprisingly, little research examines the use of complex language during these culturally-significant trials. In the present study, we help fill this gap by evaluating the relationship between attorneys' use of integratively complex language and trial outcomes. Using the well-validated Automated Integrative Complexity scoring system, we analyzed the complexity of language in the opening and closing statements of famous trials. We found that higher levels of integrative complexity led to a significant increase in winning outcomes, but only for the prosecution. Further, this effect was driven by elaborative forms of complexity rather than dialectical forms. Taken together, these results fill a large gap in our understanding of how language might influence the outcomes of culturally-significant legal proceedings.