Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously and thoroughly investigated distributions produced by the output of Turing machines. This approach allows for numerical estimations of algorithmic (Kolmogorov-Chaitin) complexity at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between computational models. In the context of natural processes that operate below the Turing universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution.
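As a toy illustration of the Coding-theorem-like estimation described above, the sketch below (an assumption-laden stand-in, not the paper's Turing-machine or Chomsky-hierarchy models) builds an empirical output distribution from a small resource-bounded model, here elementary cellular automata run for a fixed number of steps, and estimates complexity as K(s) ≈ -log2 P(s):

```python
# Toy sketch only: elementary cellular automata as a stand-in for a
# resource-bounded (subuniversal) model of computation; NOT the models
# studied in the paper. Output frequencies give a Coding-Theorem-style
# complexity estimate K(s) ~ -log2 P(s): frequent outputs are "simple".
import itertools, math
from collections import Counter

def eca_step(cells, rule):
    # One synchronous update of a 1D elementary cellular automaton (wrap-around).
    n = len(cells)
    return tuple((rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                 for i in range(n))

def run(rule, init, steps=8):
    cells = init
    for _ in range(steps):
        cells = eca_step(cells, rule)
    return cells

width = 8
counts = Counter()
for rule in range(256):                                     # all 256 ECA rules
    for init in itertools.product((0, 1), repeat=width):    # all initial tapes
        counts[run(rule, init)] += 1

total = sum(counts.values())
for out, c in counts.most_common(3):                        # the most frequent = "simplest" outputs
    print(''.join(map(str, out)), round(-math.log2(c / total), 2))
```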
Mode division multiplexing (MDM) using orbital angular momentum (OAM) is a recently developed physical-layer transmission technique, which has attracted intense interest in the optics, millimeter-wave, and radio-frequency communities owing to its capability to enhance communication capacity while retaining ultra-low receiver complexity. In this paper, the system model based on OAM-MDM is mathematically analyzed and it is theoretically concluded that such a system architecture can bring a vast reduction in receiver complexity without a capacity penalty compared with conventional line-of-sight multiple-input multiple-output systems under the same physical constraints. Furthermore, a 4×4 OAM-MDM communication experiment adopting a pair of easily realized Cassegrain reflector antennas capable of multiplexing/demultiplexing four orthogonal OAM modes of l = -3, -2, +2, and +3 is carried out at a microwave frequency of 10 GHz. The experimental results show high spectral efficiency as well as low receiver complexity.
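The following sketch is a generic, assumption-heavy illustration (not the paper's Cassegrain-antenna analysis) of why OAM multiplexing can cut receiver complexity: when the line-of-sight channel between aligned circular apertures is approximately circulant, the DFT/IDFT pair multiplexes and demultiplexes the modes, so the receiver reduces to per-mode single-tap equalizers instead of a full matrix inversion.

```python
# Generic illustration under a circulant-channel assumption; NOT the paper's
# antenna model or derivation. A circulant LoS channel is diagonalized by the
# DFT, so OAM "demultiplexing" is an FFT plus N single-tap equalizers.
import numpy as np

N = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)      # first column of the channel
H = np.array([np.roll(c, k) for k in range(N)]).T              # circulant channel matrix

U = np.fft.fft(np.eye(N)) / np.sqrt(N)                         # unitary DFT matrix
eigs = np.fft.fft(c)                                           # eigenvalues of a circulant matrix

x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)     # one QPSK symbol per OAM mode
tx = U.conj().T @ x                                            # IDFT-style multiplexing
rx = H @ tx                                                    # noiseless LoS propagation
x_hat = (U @ rx) / eigs                                        # FFT + per-mode single-tap equalization

print(np.allclose(x_hat, x))                                   # True: no full MIMO inversion needed
```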
One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., P \nsubseteq NC^1). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that the depth complexity of a composition of functions f \diamond g is roughly the sum of the depth complexities of f and g. They showed that the validity of this conjecture would imply that P \nsubseteq NC^1. The intuition that underlies the KRW conjecture is that the composition f \diamond g should behave like a "direct-sum problem", in a certain sense, and therefore the depth complexity of f \diamond g should be the sum of the individual depth complexities. Nevertheless, there are two obstacles to turning this intuition into a proof: first, we do not know how to prove that f \diamond g must behave like a direct-sum problem; second, we do not know how to prove that the complexity of the latter direct-sum problem is indeed the sum of the individual complexities. In this work, we focus on the second obstacle. To this end, we study a notion called "strong composition", which is the same as f \diamond g except that it is forced to behave like a direct-sum problem. We prove a variant of the KRW conjecture for strong composition, thus overcoming the second obstacle above. This result demonstrates that the first obstacle is the crucial barrier to resolving the KRW conjecture. Along the way, we develop some general techniques that might be of independent interest.
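For concreteness, the standard formulation assumed here (notation may differ slightly from the paper's) defines block-composition and states the KRW conjecture on its depth complexity:

```latex
% Block-composition of f : {0,1}^m -> {0,1} with g : {0,1}^n -> {0,1},
% applied to an m x n input matrix whose rows are x_1, ..., x_m:
\[
  (f \diamond g)(x_1, \dots, x_m) \;=\; f\bigl(g(x_1), \dots, g(x_m)\bigr).
\]
% KRW conjecture: writing D(h) for the minimal depth of a fan-in-2 circuit for h,
\[
  \mathsf{D}(f \diamond g) \;\approx\; \mathsf{D}(f) + \mathsf{D}(g).
\]
% Iterating this on suitable functions of depth ~ log n would give an explicit
% function of super-logarithmic depth, i.e., P \nsubseteq NC^1.
```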
In linguistics, there is little consensus on how to define, measure, and compare complexity across languages. We propose to take the diversity of viewpoints as a given, and to capture the complexity of a language by a vector of measurements rather than a single value. We then assess the statistical support for two controversial hypotheses: the trade-off hypothesis and the equi-complexity hypothesis. We furnish meta-analyses of 28 complexity metrics applied to texts written in a total of 80 typologically diverse languages. The trade-off hypothesis is partially supported, in the sense that around one third of the significant correlations between measures are negative. The equi-complexity hypothesis, on the other hand, is largely confirmed. While we find evidence for complexity differences in the domains of morphology and syntax, the overall complexity vectors of languages turn out to be virtually indistinguishable.
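The sketch below uses placeholder data (not the study's corpus or metrics) to show the kind of computation behind the trade-off figure: represent each language as a vector of metric values, then count how many significant pairwise correlations between metrics are negative.

```python
# Illustrative sketch on random placeholder data; NOT the study's 80-language
# corpus or its 28 metrics. Counts negative significant pairwise correlations
# between complexity metrics, the quantity behind the trade-off hypothesis.
import itertools
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_languages, n_metrics = 80, 28
X = rng.standard_normal((n_languages, n_metrics))   # placeholder language-by-metric matrix

negative, significant = 0, 0
for i, j in itertools.combinations(range(n_metrics), 2):
    rho, p = spearmanr(X[:, i], X[:, j])
    if p < 0.05:
        significant += 1
        negative += int(rho < 0)

print(f"{negative} of {significant} significant correlations are negative")
```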
The increasing capacity of storage devices and the growing frequency of correlated failures demand better fault tolerance by using maximum distance separable (MDS) array codes with triple parity. Although many constructions of triple-parity MDS array codes have been proposed, they either have large update complexity, have large encoding/decoding complexity, or support only specific parameters. In this paper, we propose two classes of triple-parity MDS array codes, called extended EVENODD+ and STAR+, both of which have asymptotically optimal update complexity and lower encoding/decoding complexity. We show that the existing extended EVENODD and STAR codes are special cases of our extended EVENODD+ and STAR+, respectively. Moreover, we show that our extended EVENODD+ and STAR+ have strictly lower encoding/decoding/update complexity than extended EVENODD and STAR for most parameters.
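The toy below is not the EVENODD+/STAR+ construction (and is not MDS); it only illustrates, under a simplified slope-based layout, what optimal update complexity means for triple parity: a single-symbol write should touch exactly one parity symbol per parity slope, i.e., three parity updates.

```python
# Simplified, non-MDS toy with three parity slopes (0, +1, -1) over a
# (p-1) x p data array; NOT the paper's EVENODD+/STAR+ codes. It illustrates
# optimal update complexity: one data-symbol write touches exactly 3 parities.
import numpy as np

p = 5
data = np.random.randint(0, 2, size=(p - 1, p))

def encode(d):
    row  = np.zeros(p, dtype=int)   # slope  0 parities
    diag = np.zeros(p, dtype=int)   # slope +1 parities
    anti = np.zeros(p, dtype=int)   # slope -1 parities
    for i in range(p - 1):
        for j in range(p):
            row[i]            ^= d[i, j]
            diag[(i + j) % p] ^= d[i, j]
            anti[(i - j) % p] ^= d[i, j]
    return row, diag, anti

row_p, diag_p, anti_p = encode(data)

# Update one data symbol and patch exactly three parity symbols.
i, j = 2, 4
data[i, j] ^= 1
row_p[i] ^= 1
diag_p[(i + j) % p] ^= 1
anti_p[(i - j) % p] ^= 1

new_row, new_diag, new_anti = encode(data)
assert (new_row == row_p).all() and (new_diag == diag_p).all() and (new_anti == anti_p).all()
print("incremental 3-parity update matches full re-encoding")
```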
Relative to the large literature on upper bounds on the complexity of convex optimization, less attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes.
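For reference, a standard way to formalize the minimax oracle complexity discussed above (assumed notation, possibly differing from the paper's):

```latex
% Assumed standard formulation: after T queries to a stochastic first-order
% oracle, the minimax optimization error over a function class F and the class
% M_T of T-query methods is
\[
  \epsilon^*(\mathcal{F}, T)
  \;=\;
  \inf_{\mathsf{M} \in \mathcal{M}_T}\;
  \sup_{f \in \mathcal{F}}\;
  \mathbb{E}\Bigl[\, f(x_{\mathsf{M}}) - \inf_{x \in \mathcal{X}} f(x) \,\Bigr],
\]
% and lower bounds follow by reducing the estimation of a parameter indexing a
% well-separated subfamily of F to optimization, then applying information-
% theoretic (Fano/Le Cam-type) bounds on the estimation problem.
```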
For uplink large-scale multiple-input multiple-output (MIMO) systems, the minimum mean square error (MMSE) algorithm is near-optimal but involves matrix inversion with high complexity. In this paper, we propose to exploit the Gauss-Seidel (GS) method to iteratively realize the MMSE algorithm without the complicated matrix inversion. To further accelerate the convergence rate and reduce the complexity, we propose a diagonal-approximate initial solution to the GS method, which is much closer to the final solution than the traditional zero-vector initial solution. We also propose an approximated method to compute log-likelihood ratios for soft channel decoding with a negligible performance loss. The analysis shows that the proposed GS-based algorithm can reduce the computational complexity from O(K^3) to O(K^2), where K is the number of users. Simulation results verify that the proposed algorithm outperforms the recently proposed Neumann series approximation algorithm and achieves the near-optimal performance of the classical MMSE algorithm with a small number of iterations.
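A minimal sketch of the general idea, with assumed parameters and structure (not the paper's exact algorithm or its LLR approximation): solve the MMSE filtering equation A x = b, with A = H^H H + σ² I and b = H^H y, by Gauss-Seidel sweeps started from the diagonal-approximate solution x0 = D^{-1} b.

```python
# Sketch under assumptions; NOT the paper's exact algorithm. Gauss-Seidel
# iterations solve (H^H H + sigma^2 I) x = H^H y at O(K^2) per sweep,
# starting from the diagonal-approximate initial solution x0 = D^{-1} b.
import numpy as np

def gs_mmse_detect(H, y, sigma2, iters=3):
    K = H.shape[1]
    A = H.conj().T @ H + sigma2 * np.eye(K)     # Gram matrix; diagonally dominant when N >> K
    b = H.conj().T @ y
    x = b / np.diag(A)                          # diagonal-approximate initial solution
    for _ in range(iters):                      # Gauss-Seidel sweeps
        for k in range(K):
            x[k] = (b[k] - A[k, :k] @ x[:k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

# Toy usage: N = 64 antennas, K = 8 users, QPSK symbols.
rng = np.random.default_rng(0)
N, K, sigma2 = 64, 8, 0.1
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)
y = H @ s + np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(np.round(gs_mmse_detect(H, y, sigma2), 2))
```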
Retirement has been associated with cognitive decline. However, the influence of specific job characteristics, such as occupational complexity, on post-retirement cognitive outcomes is not well understood. Data from the Midlife in the United States (MIDUS) study were used to examine occupational complexity in relation to cognitive performance and cognitive change after retirement. The initial sample included 471 workers between 45 and 75 years of age. At the 9-year follow-up (T2), 149 were retired and 322 were still working. All six tasks from the Brief Test of Adult Cognition by Telephone (BTACT) were used. Hierarchical regression with workers at T1 indicated that, controlling for sociodemographic variables, complexity of work with people significantly contributed to explaining variance in overall cognitive performance (1.7%) and executive function (2%). In Latent Change Score (LCS) models, complexity of work with people was the only significant predictor of cognitive change in retirees, with those retiring from high-complexity jobs showing less decline. In conclusion, high complexity of work with people is related to better executive functioning and overall cognition during working life, and to slower decline after retirement. The finding that more intellectually stimulating work carries a cognitive advantage into retirement fits the cognitive reserve concept, whereby earlier intellectual stimulation lowers the risk of cognitive problems later in life. The results are also consistent with the unengaged lifestyle hypothesis, whereby people may slip into so-called "mental retirement," leading to post-retirement cognitive loss, which may be most apparent among those retiring from jobs with low complexity of work with people.
Systems change requires complex interventions. Cross-sector partnerships (CSPs) face the daunting task of addressing complex societal problems by aligning different backgrounds, values, ideas, and resources. A major challenge for CSPs is how to link the type of partnership to the intervention needed to drive change. Intervention strategies are thereby increasingly based on Theories of Change (ToCs). Applying ToCs is often a donor requirement, but it also reflects the ambition of a partnership to enhance its transformative potential. The current use of ToCs in partnering efforts varies greatly. There is a tendency toward a linear and relatively simple use of ToCs that does limited justice to the complexity of the problems partnerships aim to address. Since partnership dynamics are already complex and challenging in themselves, confusion and disagreement over the appropriate application of ToCs is likely to hamper rather than enhance the transformative potential of partnerships. We develop a complexity alignment framework and a diagnostic tool that enable partnerships to better appreciate the complexity of the context in which they operate, allowing them to adjust their learning strategy. This paper applies recent insights into how to deal with complexity, drawn from both the evaluation and theory-of-change fields, to studies investigating the transformative capacity of partnerships. This can (1) serve as a check to define the challenges of partnering projects and (2) help delineate the societal sources and layers of complexity that cross-sector partnerships deal with, such as failure, insufficient responsibility-taking, and collective-action problems, at four phases of partnering.