The Golden Ticket. Fortnow, Lance. 2013. eBook.
The P-NP problem is the most important open problem in computer science, if not all of mathematics. The Golden Ticket provides a nontechnical introduction to P-NP, its rich history, and its algorithmic implications for everything we do with computers and beyond. In this informative and entertaining book, Lance Fortnow traces how the problem arose during the Cold War on both sides of the Iron Curtain and gives examples of the problem from a variety of disciplines, including economics, physics, and biology. He explores problems that capture the full difficulty of the P-NP dilemma, from discovering the shortest route through all the rides at Disney World to finding large groups of friends on Facebook. But difficulty also has its advantages. Hard problems allow us to safely conduct electronic commerce and maintain privacy in our online lives.
The Golden Ticket explores what we truly can and cannot achieve computationally, describing the benefits and unexpected challenges of the P-NP problem.
We describe a new approach for understanding the difficulty of designing efficient learning algorithms. We prove that the existence of an efficient learning algorithm for a circuit class C in Angluin's model of exact learning from membership and equivalence queries, or in Valiant's PAC model, yields a lower bound against C. More specifically, we prove that any subexponential-time, deterministic exact learning algorithm for C (from membership and equivalence queries) implies the existence of a function f in EXP^NP such that f ∉ C. If C is PAC learnable with membership queries under the uniform distribution, or exactly learnable in randomized polynomial time, we prove that there exists a function f ∈ BPEXP (the exponential-time analog of BPP) such that f ∉ C.
For C equal to polynomial-size, depth-two threshold circuits (i.e., neural networks with a polynomial number of hidden nodes), our result shows that efficient learning algorithms for this class would solve one of the most challenging open problems in computational complexity theory: proving the existence of a function in EXP^NP or BPEXP that cannot be computed by circuits from C. We are not aware of any representation-independent hardness results for learning depth-two, polynomial-size neural networks with respect to the uniform distribution. Our approach uses the framework of the breakthrough result due to Kabanets and Impagliazzo showing that derandomizing BPP yields nontrivial circuit lower bounds.
The OR-SAT problem asks, given Boolean formulae ϕ1, …, ϕm, each of size at most n, whether at least one of the ϕi's is satisfiable. We show that there is no reduction from OR-SAT to any set A where the length of the output is bounded by a polynomial in n, unless NP ⊆ coNP/poly and the Polynomial-Time Hierarchy collapses. This result settles an open problem proposed by Bodlaender et al. (2008) [6] and Harnik and Naor (2006) [20] and has a number of implications. (i) A number of parametric NP problems, including Satisfiability, Clique, Dominating Set, and Integer Programming, are not instance compressible or polynomially kernelizable unless NP ⊆ coNP/poly. (ii) Satisfiability does not have PCPs of size polynomial in the number of variables unless NP ⊆ coNP/poly. (iii) An approach of Harnik and Naor to constructing collision-resistant hash functions from one-way functions is unlikely to be viable in its present form. (iv) (Buhrman–Hitchcock) There are no subexponential-size hard sets for NP unless NP ⊆ coNP/poly. We also study probabilistic variants of compression, and show various results about and connections between these variants. To this end, we introduce a new strong derandomization hypothesis, the Oracle Derandomization Hypothesis, and discuss how it relates to traditional derandomization assumptions.
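For concreteness, the OR-SAT predicate itself is easy to state in code. The sketch below is purely illustrative and ours, not the paper's: it brute-forces satisfiability of small CNF formulas (exponential time, so only the problem definition is captured, not an efficient algorithm).

```python
from itertools import product

def satisfiable(clauses, num_vars):
    """Brute-force SAT check for a CNF formula.
    Each clause is a list of nonzero ints (DIMACS-style literals):
    i means variable i is true, -i means variable i is false."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def or_sat(formulas):
    """OR-SAT: true iff at least one input formula is satisfiable.
    `formulas` is a list of (clauses, num_vars) pairs."""
    return any(satisfiable(clauses, n) for clauses, n in formulas)
```

For example, `or_sat([([[1], [-1]], 1), ([[1, 2]], 2)])` is true because the second formula (x1 ∨ x2) is satisfiable, even though the first (x1 ∧ ¬x1) is not.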
We define a new notion of “robust simulations” between complexity classes, which is intermediate between the traditional notions of infinitely-often and almost-everywhere simulation, as well as a corresponding notion of “significant separations”. A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths.
We show that various implications in complexity theory such as the collapse of PH if NP=P and the Karp–Lipton theorem have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory can be strengthened to significant separations, though in each case, an almost everywhere separation is unknown.
Proving our results requires several new ideas, including a completely different proof of the hierarchy theorem for non-deterministic polynomial time than the ones previously known.
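The informal definition of robust simulation can be sketched in symbols; this is our reconstruction from the prose above, not necessarily the paper's exact notation:

```latex
% L is robustly simulated in C if some L' in C agrees with L on
% arbitrarily large polynomial stretches of input lengths:
L \in \mathrm{rc}\text{-}\mathcal{C} \iff \exists\, L' \in \mathcal{C}\;
\forall k \ge 1\; \exists^{\infty} n :\;
L^{=m} = L'^{=m} \text{ for all } m \text{ with } n \le m \le n^{k},
```

where L^{=m} denotes the restriction of L to inputs of length m. Infinitely-often simulation is the special case of agreement at single lengths; almost-everywhere simulation requires agreement at all but finitely many lengths.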
We introduce Computational Depth, a measure of the amount of “nonrandom” or “useful” information in a string, obtained by considering the difference of various Kolmogorov complexity measures. We investigate three instantiations of Computational Depth:
• Basic Computational Depth, a clean notion capturing the spirit of Bennett's Logical Depth. We show that a Turing machine M runs in time polynomial on average over the time-bounded universal distribution if and only if, for all inputs x, M uses time exponential in the basic computational depth of x.
• Sublinear-time Computational Depth and the resulting concept of Shallow Sets, a generalization of sparse and random sets based on low-depth properties of their characteristic sequences. We show that every computable set that is reducible to a shallow set has polynomial-size circuits.
• Distinguishing Computational Depth, measuring when strings are easier to recognize than to produce. We show that if a Boolean formula has a nonnegligible fraction of its satisfying assignments with low depth, then we can find a satisfying assignment efficiently.
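As a rough sketch of the underlying idea (our notation, inferred from the informal description above, not quoted from the paper): basic computational depth contrasts a time-bounded Kolmogorov complexity with the unbounded one,

```latex
\mathrm{bcd}^{t}(x) \;=\; K^{t}(x) \;-\; K(x),
```

where K^t(x) is the length of the shortest program producing x within t(|x|) steps and K(x) is plain Kolmogorov complexity. Both random strings (incompressible even without a time bound) and trivially structured strings (quickly compressible) have low depth; only strings whose structure takes a long time to exploit are deep.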
We use the powerful tools of counting complexity and generic oracles to help understand the limitations of the complexity of quantum computation. We show several results for the probabilistic quantum class BQP: BQP is low for PP, i.e., PP^BQP = PP; there exists a relativized world where P = BQP and the polynomial-time hierarchy is infinite; there exists a relativized world where BQP does not have complete sets; there exists a relativized world where P = BQP but P ≠ UP ∩ coUP and one-way functions exist. This gives a relativized answer to an open question of Simon.
According to economic theory, supported by empirical and laboratory evidence, the equilibrium price of a financial security reflects all of the information regarding the security's value. We investigate the computational process on the path toward equilibrium, in which information distributed among traders is revealed step by step over time and incorporated into the market price. We develop a simplified model of an information market, along with trading strategies, in order to formalize the computational properties of the process. We show that securities whose payoffs cannot be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory. On the other hand, securities whose payoffs are threshold functions are guaranteed to converge, for all prior probability distributions. Moreover, these threshold securities converge in at most n rounds, where n is the number of bits of distributed information. We also prove a lower bound, showing a type of threshold security that requires at least n/2 rounds to converge in the worst case.
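The class of payoffs shown above to converge, weighted threshold functions of the distributed bits, can be illustrated with a short sketch (function names and interface are ours, purely for illustration, not from the paper):

```python
def threshold_payoff(weights, theta, bits):
    """Payoff of a weighted threshold security: pays 1 iff the
    weighted sum of the distributed input bits reaches theta."""
    return int(sum(w * b for w, b in zip(weights, bits)) >= theta)

# Majority of n bits is the unit-weight special case:
# weights = [1] * n, theta = n // 2 + 1.
```

For example, a majority security on five bits pays off exactly when at least three of the five distributed bits are 1; a security whose payoff is, say, the parity of the bits is not expressible this way and falls outside the convergence guarantee.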