Descriptive complexity provides intrinsic, that is, machine-independent, characterizations of the major complexity classes. On the other hand, logic can be useful for designing programs in a natural, declarative way. This is particularly important for parallel computation models such as cellular automata, because designing parallel programs is considered a difficult task. This paper establishes three logical characterizations of the three classical complexity classes modeling minimal time, called real-time, of one-dimensional cellular automata according to their canonical variants: unidirectional or bidirectional communication, and input word given in a parallel or sequential way. Our three logics are natural restrictions of existential second-order Horn logic with built-in successor and predecessor functions. These logics correspond exactly to the three ways of deciding a language on a square grid circuit of side n according to the three canonical locations of an input word of length n: along a side of the grid, on the diagonal that contains the output cell, or on the diagonal opposite to the output cell. The key ingredient of our results is a normalization method that transforms a formula from one of our three logics into an equivalent normalized formula that faithfully mimics a grid circuit. We then extend our logics by allowing a limited use of negation on hypotheses, as in Stratified Datalog. By revisiting in detail a number of representative classical problems (recognition of the set of primes by Fischer's algorithm, Dyck language recognition, the Firing Squad Synchronization problem, etc.), we show that this extension makes programming easier, and we prove that it does not change the real-time complexity of our logics. Finally, drawing on our experience in expressing these representative problems in logic, we argue that our logics are high-level programming languages: they make it possible to express, in a natural, precise, and synthetic way, the signal-based algorithms of the literature, and to translate them automatically into cellular automata of the same complexity.
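For readers unfamiliar with the base logic: existential second-order Horn formulas (in the sense of Grädel's characterization of P) have the following general shape. This is only the generic template; the precise restrictions defining the three real-time logics are given in the paper itself.

```latex
\Phi \;=\; \exists R_1 \cdots \exists R_k \;
           \forall x_1 \cdots \forall x_m \;
           \bigwedge_{i=1}^{p} C_i
```

where each clause $C_i$ is Horn, i.e., of the form $\alpha_1 \wedge \cdots \wedge \alpha_q \rightarrow \beta$ with at most one positive occurrence $\beta$ of a quantified relation $R_j$, and the hypotheses $\alpha_\ell$ may mention built-in functions such as successor and predecessor.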
During the last four decades, digital technologies have disrupted many industries. Car control systems have gone from mechanical to digital. Telephones have changed from sound boxes to portable computers. But have the firms that digitized their products and services become more valuable than firms that did not? Here we introduce the construct of digital proximity, which considers the interdependent activities of firms linked in an economic network. We then explore how the digitization of products and services affects a company's Tobin's q (the ratio of market value to assets), a measure of the intangible value of a firm. Our panel regression methods and robustness tests suggest a positive influence of a firm's digital proximity on its Tobin's q. This implies that firms able to move closer to the digital sector have increased their intangible value compared with those that have failed to do so. These findings contribute a new way of measuring digitization and its impact on firm performance that is complementary to traditional measures of information technology (IT) intensity.
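Tobin's q itself is a simple ratio; a minimal sketch (the function name and example figures are illustrative, not from the study's data):

```python
def tobins_q(market_value, total_assets):
    """Tobin's q: the ratio of a firm's market value to its assets.

    A value above 1 suggests intangible value beyond book assets.
    """
    if total_assets <= 0:
        raise ValueError("total assets must be positive")
    return market_value / total_assets

# Illustrative figures only: a firm valued at 150 with assets of 100
print(tobins_q(150.0, 100.0))  # 1.5
```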
In this paper, we consider secure downlink transmission in a multicell massive multiple-input multiple-output (MIMO) system where the numbers of base station (BS) antennas, mobile terminals, and eavesdropper antennas are asymptotically large. The channel state information of the eavesdropper is assumed to be unavailable at the BS; hence, linear precoding of data and artificial noise (AN) is employed for secrecy enhancement. Four different data precoders (i.e., selfish zero-forcing (ZF)/regularized channel inversion (RCI) and collaborative ZF/RCI precoders) and three different AN precoders (i.e., random and selfish/collaborative null-space-based precoders) are investigated, and the corresponding achievable ergodic secrecy rates are analyzed. Our analysis includes the effects of uplink channel estimation, pilot contamination, multicell interference, and path loss. Furthermore, to strike a balance between complexity and performance, linear precoders based on matrix polynomials are proposed for both data and AN precoding. The polynomial coefficients of the data and AN precoders are optimized to minimize, respectively, the sum mean squared error of, and the AN leakage to, the mobile terminals in the cell of interest, using tools from free probability and random matrix theory. Our analytical and simulation results provide interesting insights for the design of secure multicell massive MIMO systems and reveal that the proposed polynomial data and AN precoders closely approach the performance of the selfish RCI data and null-space-based AN precoders, respectively.
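As a hedged illustration (not the paper's full multicell model with estimated channels), the following NumPy sketch shows the two basic precoder families named above for a single cell with perfect channel knowledge: a selfish ZF data precoder and a null-space-based AN precoder. The antenna and user counts `N` and `K` are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 3                      # BS antennas, single-antenna users (assumed sizes)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Selfish ZF data precoder: right pseudo-inverse of the local channel
F = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# Null-space-based AN precoder: columns spanning the null space of H,
# so the artificial noise causes no interference to the intended users
_, _, Vh = np.linalg.svd(H)
A = Vh[K:].conj().T              # N x (N - K) null-space basis

# Check: users see no AN leakage, and the data streams are separated
print(np.allclose(H @ A, 0))          # True
print(np.allclose(H @ F, np.eye(K)))  # True
```

The eavesdropper, whose channel is unknown to the BS, still receives the AN, which is what degrades its reception relative to the legitimate users.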
In this work, we propose a new deep image compression framework that aims to learn one single network supporting variable-bitrate coding under various computational complexity levels. In contrast to existing state-of-the-art learning-based image compression frameworks, which only consider the rate-distortion trade-off without any constraint on computational complexity, our Complexity and Bitrate Adaptive Network (CBANet) considers the rate-distortion-complexity trade-off when learning a single network that supports multiple computational complexity levels and variable bitrates. Since solving such a rate-distortion-complexity optimization problem is non-trivial, we propose a two-step approach that decouples it into a complexity-distortion sub-task and a rate-distortion sub-task, and we additionally propose a new network design strategy that introduces a Complexity Adaptive Module (CAM) and a Bitrate Adaptive Module (BAM) to achieve the complexity-distortion and rate-distortion trade-offs, respectively. As a general approach, our network design strategy can be readily incorporated into different deep image compression methods to achieve complexity- and bitrate-adaptive image compression with a single network. Comprehensive experiments on two benchmark datasets demonstrate the effectiveness of our CBANet for deep image compression. Code is released at https://github.com/JinyangGuo/CBANet-release.
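The division of labor between the two modules can be caricatured as follows. `cam_forward` and `bam_quantize` are hypothetical stand-ins for CAM and BAM, showing only the two underlying knobs (fewer active channels for lower complexity, coarser quantization for lower bitrate), not the actual network design.

```python
import numpy as np

def cam_forward(x, weights, complexity_level):
    """Hypothetical complexity knob: use only a fraction of the output
    channels, so one weight set serves several compute budgets."""
    n = max(1, int(weights.shape[0] * complexity_level))
    return weights[:n] @ x       # smaller matrix => fewer multiply-adds

def bam_quantize(latent, bitrate_level):
    """Hypothetical bitrate knob: coarser quantization step at lower
    bitrate levels (the rate-distortion side of the trade-off)."""
    step = 1.0 / bitrate_level
    return np.round(latent / step) * step

x = np.ones(16)
w = np.arange(64, dtype=float).reshape(4, 16)
print(cam_forward(x, w, 0.5).shape)   # (2,) -- half the output channels
```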