Descriptive complexity provides intrinsic, that is, machine-independent, characterizations of the major complexity classes. On the other hand, logic can be useful for designing programs in a natural, declarative way. This is particularly important for parallel computation models such as cellular automata, because designing parallel programs is considered a difficult task. This paper establishes three logical characterizations of the three classical complexity classes modeling minimal time, called real-time, of one-dimensional cellular automata according to their canonical variants: unidirectional or bidirectional communication, and input word given in a parallel or sequential way. Our three logics are natural restrictions of existential second-order Horn logic with built-in successor and predecessor functions. These logics correspond exactly to the three ways of deciding a language on a square grid circuit of side n according to one of the three canonical locations of an input word of length n: along a side of the grid, on the diagonal that contains the output cell, or on the diagonal opposite to the output cell. The key ingredient of our results is a normalization method that transforms a formula from one of our three logics into an equivalent normalized formula that faithfully mimics a grid circuit. We then extend our logics by allowing a limited use of negation on hypotheses, as in Stratified Datalog. By revisiting in detail a number of representative classical problems (recognition of the set of primes by Fischer's algorithm, Dyck language recognition, the Firing Squad Synchronization problem, etc.), we show that this extension makes programming easier, and we prove that it does not change the real-time complexity of our logics. Finally, drawing on our experience in expressing those representative problems in logic, we argue that our logics are high-level programming languages: they allow one to express the signal-based algorithms of the literature in a natural, precise, and concise way, and to translate them automatically into cellular automata of the same complexity.
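To make the computation model concrete, here is a minimal sketch of a one-dimensional cellular automaton with bidirectional communication: every cell updates synchronously from its left neighbor, itself, and its right neighbor. The rule choice (elementary rule 110) and the quiescent boundary are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a one-dimensional cellular automaton with
# bidirectional communication: each cell updates from (left, self,
# right). Rule and boundary handling are illustrative, not the paper's.

def step(cells, rule):
    """One synchronous update; boundaries see a quiescent state 0."""
    n = len(cells)
    padded = [0] + list(cells) + [0]
    return [rule(padded[i], padded[i + 1], padded[i + 2]) for i in range(n)]

def run(cells, rule, steps):
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Example local rule: elementary rule 110, as a lookup table.
RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
rule110 = lambda l, c, r: RULE110[(l, c, r)]

print(run([0, 0, 0, 1, 0, 0, 0], rule110, 3))  # → [1, 1, 0, 1, 0, 0, 0]
```

Real-time recognition then asks whether a designated output cell reaches an accepting state after the minimal number of such steps needed for information to cross the input.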
During the last four decades, digital technologies have disrupted many industries. Car control systems have gone from mechanical to digital. Telephones have changed from sound boxes to portable computers. But have the firms that digitized their products and services become more valuable than firms that didn't? Here we introduce the construct of digital proximity, which considers the interdependent activities of firms linked in an economic network. We then explore how the digitization of products and services affects a company's Tobin's q (the ratio of market value over assets), a measure of the intangible value of a firm. Our panel regression methods and robustness tests suggest a positive influence of a firm's digital proximity on its Tobin's q. This implies that firms able to come closer to the digital sector have increased their intangible value compared to those that have failed to do so. These findings contribute a new way of measuring digitization and its impact on firm performance that is complementary to traditional measures of information technology (IT) intensity.
The complexity of constrained min-max optimization
Daskalakis, Constantinos; Skoulakis, Stratis; Zampetakis, Manolis
Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
06/2021
Conference Proceeding
Peer-reviewed
Open access
Despite its important applications in Machine Learning, min-max optimization of objective functions that are nonconvex-nonconcave remains elusive. Not only are there no known first-order methods converging to even approximate local min-max equilibria (a.k.a. approximate saddle points), but the computational complexity of identifying them is also poorly understood. In this paper, we provide a characterization of the computational complexity, as well as of the limitations of first-order methods, in this problem. Specifically, we show that in linearly constrained min-max optimization problems with nonconvex-nonconcave objectives, an approximate local min-max equilibrium of large enough approximation is guaranteed to exist, but computing such a point is PPAD-complete. The same is true of computing an approximate fixed point of the (Projected) Gradient Descent/Ascent update dynamics, which is computationally equivalent to computing approximate local min-max equilibria. An important byproduct of our proof is to establish an unconditional hardness result in the Nemirovsky-Yudin (1983) oracle optimization model, where we are given oracle access to the values of some function f : P → [−1, 1] and its gradient ∇f, where P ⊆ [0, 1]^d is a known convex polytope. We show that any algorithm that uses such first-order oracle access to f and finds an ε-approximate local min-max equilibrium needs to make a number of oracle queries that is exponential in at least one of 1/ε, L, G, or d, where L and G are, respectively, the smoothness and Lipschitzness of f. This comes in sharp contrast to minimization problems, where finding approximate local minima in the same setting can be done with Projected Gradient Descent using O(L/ε) many queries. Our result is the first to show an exponential separation between these two fundamental optimization problems in the oracle model.
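The update dynamics in question can be sketched in a few lines. This toy uses the bilinear objective f(x, y) = x·y over the box [−1, 1]², with the step size chosen arbitrarily; the paper itself concerns general nonconvex-nonconcave objectives over polytopes, so this is only an illustration of the Projected GDA rule, not of the hardness result.

```python
# Minimal sketch of Projected Gradient Descent/Ascent (GDA) dynamics
# on a toy bilinear objective f(x, y) = x * y over the box [-1, 1]^2.
# Objective and step size are illustrative assumptions.

def proj(v, lo=-1.0, hi=1.0):
    """Projection onto the interval [lo, hi]."""
    return max(lo, min(hi, v))

def gda_step(x, y, eta=0.1):
    gx = y   # ∂f/∂x for f(x, y) = x * y
    gy = x   # ∂f/∂y
    return proj(x - eta * gx), proj(y + eta * gy)  # descend in x, ascend in y

x, y = 0.5, 0.5
for _ in range(200):
    x, y = gda_step(x, y)
# A fixed point of this update is a local min-max equilibrium; for
# bilinear f it is the saddle (0, 0), though plain GDA is known to
# cycle around it rather than converge.
print(x, y)
```

The paper's point is that finding even an approximate fixed point of exactly this kind of update is PPAD-complete in the constrained nonconvex-nonconcave setting.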
We define a new asymmetric partizan loopy combinatorial game, Forced-Capture Hnefatafl, similar to Hnefatafl except that players are forced to make capturing moves when available. We show that this game is PSPACE-hard using a reduction from Positive-CNF, making progress towards classifying the computational complexity of proper Hnefatafl.
The concept of nimbers—a.k.a. Grundy-values or nim-values—is fundamental to combinatorial game theory. Beyond winnability, nimbers provide a complete characterization of strategic interactions among impartial games in disjunctive sums. In this paper, we consider nimber-preserving reductions among impartial games, which enhance the winnability-preserving reductions in traditional computational characterizations of combinatorial games. We prove that Generalized Geography is complete for the natural class, IP, of polynomially-short impartial rulesets, under polynomial-time nimber-preserving reductions. We refer to this notion of completeness as Sprague-Grundy-completeness. In contrast, we also show that not every PSPACE-complete ruleset in IP is Sprague-Grundy-complete for IP.
By viewing every impartial game as an encoding of its nimber—a succinct game secret richer than its winnability alone—our technical result establishes the following striking cryptography-inspired homomorphic theorem: Despite the PSPACE-completeness of nimber computation for IP, there exists a polynomial-time algorithm to construct, for any pair of games G1, G2 in IP, a Generalized Geography game G satisfying: nimber(G) = nimber(G1) ⊕ nimber(G2).
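The Sprague-Grundy machinery behind this statement is classical and easy to exercise directly: the nimber of a position is the mex (minimum excludant) of the nimbers of its options, and the nimber of a disjunctive sum is the XOR of the components' nimbers. The sketch below verifies the XOR law from first principles on Nim heaps, used here purely as a toy ruleset.

```python
# Sketch of the Sprague-Grundy theorem underlying the homomorphic
# result: nimber of a position = mex of the nimbers of its options,
# and nimber(G1 + G2) = nimber(G1) XOR nimber(G2) for disjunctive
# sums. Nim heaps serve as the toy impartial ruleset.

from functools import lru_cache

def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nimber(heaps):
    """Grundy value of a Nim position (tuple of heap sizes)."""
    opts = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):  # remove 1..h tokens from heap i
            opts.add(nimber(heaps[:i] + (h - take,) + heaps[i + 1:]))
    return mex(opts)

# Disjunctive-sum law, checked from first principles:
g1, g2 = (3,), (5, 2)
assert nimber(g1 + g2) == nimber(g1) ^ nimber(g2)
print(nimber(g1), nimber(g2), nimber(g1 + g2))  # → 3 7 4
```

The theorem above says that a Generalized Geography position realizing this XOR can be constructed in polynomial time, even though computing nimbers in IP is PSPACE-complete.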
One way to define the Matching Cut problem is: Given a graph G, is there an edge-cut M of G such that M is an independent set in the line graph of G? We propose the more general Conflict-Free Cut problem: Together with the graph G, we are given a so-called conflict graph Ĝ on the edges of G, and we ask for an edge-cutset M of G that is independent in Ĝ. Since conflict-free settings are popular generalizations of classical optimization problems and Conflict-Free Cut has not been considered in the literature so far, we start the study of the problem. We show that the problem is NP-complete even when the maximum degree of G is 5 and Ĝ is 1-regular. The same reduction implies an exponential lower bound on the solvability based on the Exponential Time Hypothesis. We also give parameterized complexity results: We show that the problem is fixed-parameter tractable with the vertex cover number of G as a parameter, and we show W[1]-hardness even when G has a feedback vertex set of size one and the clique cover number of Ĝ is the parameter. Since the clique cover number of Ĝ is an upper bound on the independence number of Ĝ and thus the solution size, this implies W[1]-hardness when parameterized by the cut size. We list polynomial-time solvable cases and interesting open problems. Finally, we draw a connection to a symmetric variant of SAT.
• We propose Conflict-Free Cut, which is a very natural generalization of Matching Cut.
• Conflict-Free Cut remains NP-complete, even when Δ(G) ≤ 5 and Ĝ is 1-regular.
• Conflict-Free Cut is not solvable in 2^o(n+m) time unless the ETH fails.
• We consider parameterized variants of Conflict-Free Cut and give first results.
• Conflict-Free Cut is related to SAT and solvable in 1.4769^(n+o(n)) time.
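The problem definition is easy to operationalize as a brute-force checker: enumerate vertex bipartitions of G, take the induced edge cut M, and accept if no two edges of M are in conflict. The instance below (a 4-cycle with one forbidden edge pair) is a made-up toy, not taken from the paper, and the conflict graph Ĝ is represented simply as a set of forbidden edge pairs.

```python
# Brute-force sketch of the Conflict-Free Cut definition: a cut M of G
# is feasible if no two of its edges are adjacent in the conflict
# graph Ĝ (given here as a set of forbidden edge pairs). Exponential
# time, for illustration only; the instance is a made-up toy example.

from itertools import combinations

def has_conflict_free_cut(vertices, edges, conflicts):
    vs = list(vertices)
    for r in range(1, len(vs)):           # proper bipartitions (S, V \ S)
        for side in combinations(vs, r):
            s = set(side)
            cut = {e for e in edges if len(s & set(e)) == 1}
            if not cut:
                continue                  # need a nonempty edge-cutset
            if all(frozenset(pair) not in conflicts
                   for pair in combinations(cut, 2)):
                return True
    return False

# Toy instance: a 4-cycle; Ĝ forbids taking the two "opposite" edges
# (0,1) and (2,3) together in the cut.
V = {0, 1, 2, 3}
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
CONF = {frozenset({(0, 1), (2, 3)})}
print(has_conflict_free_cut(V, E, CONF))  # → True
```

Cutting off vertex 0 alone, for instance, yields M = {(0,1), (3,0)}, which avoids the forbidden pair; the parameterized results above concern exactly how far such exhaustive search can be improved.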
In this paper, we consider secure downlink transmission in a multicell massive multiple-input multiple-output (MIMO) system where the numbers of base station (BS) antennas, mobile terminals, and eavesdropper antennas are asymptotically large. The channel state information of the eavesdropper is assumed to be unavailable at the BS; hence, linear precoding of data and artificial noise (AN) are employed for secrecy enhancement. Four different data precoders (i.e., selfish zero-forcing (ZF)/regularized channel inversion (RCI) and collaborative ZF/RCI precoders) and three different AN precoders (i.e., random and selfish/collaborative null-space-based precoders) are investigated, and the corresponding achievable ergodic secrecy rates are analyzed. Our analysis includes the effects of uplink channel estimation, pilot contamination, multicell interference, and path-loss. Furthermore, to strike a balance between complexity and performance, linear precoders based on matrix polynomials are proposed for both data and AN precoding. The polynomial coefficients of the data and AN precoders are optimized for minimization of the sum mean squared error of the mobile terminals in the cell of interest and of the AN leakage to these terminals, respectively, using tools from free probability and random matrix theory. Our analytical and simulation results provide interesting insights for the design of secure multicell massive MIMO systems and reveal that the proposed polynomial data and AN precoders closely approach the performance of selfish RCI data and null-space-based AN precoders, respectively.
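Zero-forcing precoding, the simplest of the data precoders mentioned, is a standard construction: with downlink channel matrix H (terminals × BS antennas), the precoder W = Hᴴ(HHᴴ)⁻¹ inverts the channel so that HW is diagonal, eliminating intra-cell interference. The dimensions, random channel, and power normalization below are illustrative assumptions, not the paper's system model.

```python
# Sketch of a selfish zero-forcing (ZF) data precoder for the downlink:
# W = H^H (H H^H)^{-1} makes the effective channel H @ W diagonal, so
# each terminal sees only its own stream. Dimensions, random Rayleigh
# channel, and normalization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 8                      # 4 single-antenna terminals, 8 BS antennas
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # zero-forcing precoder
W /= np.linalg.norm(W)                            # total power normalization

G = H @ W                                         # effective channel
# Off-diagonal entries (inter-user interference) are forced to zero.
print(np.max(np.abs(G - np.diag(np.diag(G)))))
```

The polynomial precoders proposed in the paper approximate this matrix inverse by a low-order matrix polynomial in HHᴴ, trading a little performance for much lower complexity.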
We prove that a maintenance problem on frequency-constrained maintenance jobs with a hierarchical structure is integer-factorization hard. This result holds even for simple systems with just two components to maintain. As a corollary, we provide a first hardness result for the modular maintenance scheduling problem of Levi et al. (2014).
Heged\H{u}s's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Heged\H{u}s (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrube\v{s}, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Heged\H{u}s's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
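The lemma can be checked exhaustively in its smallest interesting setting, which makes the statement concrete. Take F = GF(2), so p = q = 2, with n = 4 and k = 2 (toy parameters chosen for illustration): every polynomial of degree less than q = 2, i.e. every affine polynomial a0 + a1·x1 + … + a4·x4 over GF(2), that vanishes on all points of Hamming weight 2 must also vanish on the point of weight k + q = 4.

```python
# Brute-force check of Hegedűs's lemma in a tiny setting: F = GF(2),
# q = 2, n = 4, k = 2. Every affine (degree < q) polynomial over GF(2)
# vanishing on all weight-2 points of {0,1}^4 also vanishes at the
# weight-4 point (1,1,1,1). Parameters are illustrative toy choices.

from itertools import product

n = 4

def evaluate(coeffs, point):
    """coeffs = (a0, a1, ..., an) over GF(2); point in {0,1}^n."""
    a0, rest = coeffs[0], coeffs[1:]
    return (a0 + sum(a * x for a, x in zip(rest, point))) % 2

weight2 = [p for p in product((0, 1), repeat=n) if sum(p) == 2]
weight4 = [p for p in product((0, 1), repeat=n) if sum(p) == 4]

for coeffs in product((0, 1), repeat=n + 1):     # all 32 affine polynomials
    if all(evaluate(coeffs, p) == 0 for p in weight2):
        # Lemma's conclusion: the polynomial also vanishes at weight k + q.
        assert all(evaluate(coeffs, p) == 0 for p in weight4)
print("checked all", 2 ** (n + 1), "affine polynomials over GF(2)")
```

Here only the zero polynomial and x1 + x2 + x3 + x4 vanish on all weight-2 points, and the latter vanishes at (1,1,1,1) because 4 ≡ 0 (mod 2), exactly as the lemma predicts; the paper's robust version relaxes "vanishes at all weight-k points" to "vanishes at most of them".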