We study a variant of the Cops and Robber game in which the robber has unbounded speed, i.e., in her turn she may move along any path from her current vertex, but she is not allowed to pass through a vertex occupied by a cop. Let c∞(G) denote the number of cops needed to capture the robber in a graph G in this variant. We show that if G is an interval graph, then c∞(G) ∈ O(√|V(G)|), while for every n there exists an n-vertex chordal graph G with c∞(G) ∈ Ω(n/log n). This indicates a large contrast between interval graphs and chordal graphs in this variant, whereas in the original Cops and Robber game a single cop suffices for both classes.
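To make the move rule concrete, here is a minimal sketch of the robber's options in one turn: she may relocate to any vertex reachable by a path that avoids cop-occupied vertices. The graph representation and names below are illustrative, not from the paper.

```python
# Minimal sketch (illustrative): the vertices an unbounded-speed robber
# at `start` can move to, i.e. everything reachable by a cop-free path.
from collections import deque

def robber_moves(adj, start, cops):
    cops = set(cops)
    if start in cops:                 # robber is already caught
        return set()
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and v not in cops:
                seen.add(v)
                q.append(v)
    return seen

# Path a-b-c-d with a cop on c: the robber at a can reach only {a, b}.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(robber_moves(adj, "a", {"c"}))  # {'a', 'b'}
```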
We prove logarithmic upper bounds on the diameters of the random-surfer Webgraph model and the PageRank-based selection Webgraph model, confirming that the small-world phenomenon holds for them. In the special case where the generated graph is a tree, we provide close lower and upper bounds on the diameters of both models.
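As a quick sanity check on the tree case, one can simulate a random-surfer attachment process and measure the diameter. The sketch below uses one common formulation of the rule (each new vertex walks from a uniformly random existing vertex, stopping with probability p at each step, and attaches where it stops); this formulation and the parameter p are assumptions, not necessarily the paper's exact model.

```python
# Sketch of the tree case of a random-surfer attachment process (one
# common formulation; the paper's exact model may differ): each new
# vertex starts a walk at a uniformly random existing vertex, stops
# with probability p at every step, and attaches where it stops.
import random
from collections import deque

def random_surfer_tree(n, p=0.5, seed=0):
    rng = random.Random(seed)
    adj = {0: []}
    for v in range(1, n):
        u = rng.randrange(v)                 # uniform starting vertex
        while rng.random() > p and adj[u]:   # continue the walk w.p. 1-p
            u = rng.choice(adj[u])
        adj[v] = [u]
        adj[u].append(v)
    return adj

def eccentric(adj, s):
    # BFS returning the farthest vertex from s and its distance
    dist = {s: 0}
    q = deque([s])
    far = s
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                if dist[y] > dist[far]:
                    far = y
                q.append(y)
    return far, dist[far]

def tree_diameter(adj):
    # double BFS computes the exact diameter of a tree
    a, _ = eccentric(adj, 0)
    _, d = eccentric(adj, a)
    return d

for n in (10**3, 10**4, 10**5):
    print(n, tree_diameter(random_surfer_tree(n)))
```

If the bounds above apply to this formulation, the printed diameters should grow roughly logarithmically in n.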
We consider a variant of the Cops and Robber game in which the robber has unbounded speed, that is, she may take any path from her vertex in her turn, but she is not allowed to pass through a vertex occupied by a cop. Let c∞(G) denote the number of cops needed to capture the robber in a graph G in this variant, and let tw(G) denote the treewidth of G. We show that if G is planar then c∞(G) = Θ(tw(G)), and that there is a polynomial-time constant-factor approximation algorithm for computing c∞(G). We also determine, up to constant factors, the value of c∞(G) for the Erdős–Rényi random graph G = G(n,p) for all admissible values of p, and show that when the average degree is ω(1), c∞(G) is typically asymptotic to the domination number.
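The random-graph claim can be probed empirically: a greedy dominating set gives an upper-bound proxy for the domination number that the abstract ties to c∞(G(n,p)) in the dense regime. The generator and greedy heuristic below are illustrative, not the paper's method.

```python
# Illustrative (not from the paper): a greedy dominating set of G(n, p)
# upper-bounds the domination number, which governs c_inf(G(n, p)) when
# the average degree grows, per the result above.
import random

def gnp(n, p, seed=0):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_dominating_set(adj):
    n = len(adj)
    undominated = set(range(n))
    dom = []
    while undominated:
        # pick the vertex covering the most still-undominated vertices
        u = max(range(n), key=lambda x: len(({x} | adj[x]) & undominated))
        dom.append(u)
        undominated -= {u} | adj[u]
    return dom

print(len(greedy_dominating_set(gnp(500, 0.05))))
```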
We introduce a novel technique for distribution learning based on a notion of sample compression. Any class of distributions that allows such a compression scheme can be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions.
As an application of this technique, we prove that Θ̃(kd²/ε²) samples are necessary and sufficient for learning a mixture of k Gaussians in Rᵈ, up to error ε in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε²) samples suffice, matching a known lower bound. Moreover, these results hold in an agnostic learning (or robust estimation) setting, in which the target distribution is only approximately a mixture of Gaussians. Our main upper bound is proven by showing that the class of Gaussians in Rᵈ admits a small compression scheme.
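For a feel of how these rates grow, the helper below (a hypothetical name; it ignores the constants and polylog factors hidden by the tildes) evaluates the kd²/ε² and kd/ε² terms.

```python
# Illustrative only: evaluates the kd^2/eps^2 growth rate from the bound
# above, ignoring the constants and polylog factors hidden by the tildes.
def mixture_sample_order(k, d, eps, axis_aligned=False):
    """Order of samples: kd/eps^2 if axis-aligned, else kd^2/eps^2."""
    dim_term = d if axis_aligned else d * d
    return k * dim_term / eps**2

# e.g. 3 Gaussians in R^10 to accuracy 0.1 in total variation:
print(mixture_sample_order(3, 10, 0.1))          # 30000.0
print(mixture_sample_order(3, 10, 0.1, True))    # 3000.0
```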