The (1+(λ,λ)) genetic algorithm proposed in Doerr et al. (Theor Comput Sci 567:87–104, 2015) is one of the few examples for which a super-constant speed-up of the expected optimization time through the use of crossover could be rigorously demonstrated. It was proven that the expected optimization time of this algorithm on OneMax is O(max{n log(n)/λ, λn}) for any offspring population size λ ∈ {1, …, n} (and the other parameters suitably dependent on λ), and it was shown experimentally that a self-adjusting choice of λ leads to a better, most likely linear, runtime. In this work, we study more precisely how the optimization time depends on the parameter choices, leading to the following results on how to optimally choose the population size, the mutation probability, and the crossover bias, both in a static and a dynamic fashion. For the mutation probability and the crossover bias depending on λ as in Doerr et al. (2015), we improve the previous runtime bound to O(max{n log(n)/λ, nλ log log(λ)/log(λ)}). This expression is minimized by a value of λ slightly larger than what the previous result suggested and gives an expected optimization time of O(n √(log(n) log log log(n)/log log(n))). We show that no static choice in the three-dimensional parameter space of offspring population size, mutation probability, and crossover bias gives an asymptotically better runtime. We also prove that the self-adjusting parameter choice suggested in Doerr et al. (2015) outperforms all static choices and yields the conjectured linear expected runtime. This is asymptotically optimal among all possible parameter choices.
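The algorithm in question is compact enough to state in code. The following is a minimal, hypothetical Python sketch of the (1+(λ,λ)) GA on OneMax with the standard parameter coupling (mutation probability p = λ/n, crossover bias c = 1/λ); function and parameter names are ours, and this illustrates the mechanism rather than reproducing the implementation analyzed above.

```python
import random

def one_max(x):
    return sum(x)

def one_plus_lambda_lambda_ga(n, lam, max_evals=10**6, rng=None):
    """Minimal (1+(lambda,lambda)) GA on OneMax.

    Standard parameter coupling: mutation probability p = lam/n,
    crossover bias c = 1/lam. Illustrative sketch, not the paper's code.
    """
    rng = rng or random.Random(0)
    p, c = lam / n, 1.0 / lam
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while one_max(x) < n and evals < max_evals:
        # Mutation phase: draw strength ell ~ Bin(n, p); create lam offspring,
        # each flipping exactly ell random positions; keep the best.
        ell = sum(rng.random() < p for _ in range(n))
        best_mut = None
        for _ in range(lam):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            if best_mut is None or one_max(y) > one_max(best_mut):
                best_mut = y
            evals += 1
        # Crossover phase: lam biased crossovers between x and the mutation
        # winner, taking each gene from the winner with probability c.
        best_cross = x
        for _ in range(lam):
            z = [best_mut[i] if rng.random() < c else x[i] for i in range(n)]
            if one_max(z) >= one_max(best_cross):
                best_cross = z
            evals += 1
        # Elitist selection.
        if one_max(best_cross) >= one_max(x):
            x = best_cross
    return x, evals
```

The two-phase structure is the point: a large mutation strength first explores, and the biased crossover then repairs the collateral damage of the aggressive mutation.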
The constraint satisfaction problem (CSP) on a finite relational structure B is to decide, given a set of constraints on variables where the relations come from B, whether or not there is an assignment to the variables satisfying all of the constraints; the surjective CSP is the variant where one decides the existence of a surjective satisfying assignment onto the universe of B. We present an algebraic framework for proving hardness results on surjective CSPs; essentially, this framework computes global gadgetry that permits one to present a reduction from a classical CSP to a surjective CSP. We show how to derive a number of hardness results for surjective CSP in this framework, including the hardness of the disconnected cut problem, of the no-rainbow three-coloring problem, and of the surjective CSP on all two-element structures known to be intractable (in this setting). Our framework thus allows us to unify these hardness results and reveal common structure among them; we believe that our hardness proof for the disconnected cut problem is more succinct than the original. In our view, the framework also makes very transparent a way in which classical CSPs can be reduced to surjective CSPs.
Computing a (short) path between two vertices is one of the most fundamental primitives in graph algorithmics. In recent years, the study of paths in temporal graphs, that is, graphs where the vertex set is fixed but the edge set changes over time, has gained more and more attention. A path is time-respecting, or temporal, if it uses edges with non-decreasing time stamps. We investigate a basic constraint for temporal paths, where the time spent at each vertex must not exceed a given duration Δ, referred to as Δ-restless temporal paths. This constraint arises naturally in the modeling of real-world processes like packet routing in communication networks and infection transmission routes of diseases where recovery confers lasting resistance. While finding temporal paths without waiting-time restrictions is known to be doable in polynomial time, we show that the "restless" variant of this problem becomes computationally hard even in very restrictive settings. For example, it is W[1]-hard when parameterized by the distance to disjoint paths of the underlying graph, which implies W[1]-hardness for many other parameters like feedback vertex number and pathwidth. A natural question is thus whether the problem becomes tractable in some natural settings. We explore several natural parameterizations, presenting FPT algorithms for three kinds of parameters: (1) output-related parameters (here, the maximum length of the path), (2) classical parameters applied to the underlying graph (e.g., feedback edge number), and (3) a new parameter called timed feedback vertex number, which captures finer-grained temporal features of the input temporal graph, and which may be of interest beyond this work.
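For intuition, a Δ-restless temporal path can be checked by brute force; the exponential running time of such a search is consistent with the hardness results above. The function below is a hypothetical illustration (names and interface are ours), using non-decreasing time stamps and a waiting budget Δ at every visited vertex after the source.

```python
def restless_temporal_path(edges, s, t, delta):
    """Decide whether a Delta-restless temporal path from s to t exists.

    edges: iterable of (u, v, time) undirected time-stamped edges.
    A path uses edges with non-decreasing time stamps and waits at most
    `delta` time units at each vertex after the first. Brute-force DFS
    over vertex-disjoint paths; exponential in general, in line with
    the hardness results above.
    """
    adj = {}
    for u, v, tau in edges:
        adj.setdefault(u, []).append((v, tau))
        adj.setdefault(v, []).append((u, tau))

    def dfs(v, arrival, visited):
        if v == t:
            return True
        for w, tau in adj.get(v, []):
            # Depart no earlier than arrival; wait at most delta.
            usable = arrival is None or arrival <= tau <= arrival + delta
            if usable and w not in visited:
                if dfs(w, tau, visited | {w}):
                    return True
        return False

    return dfs(s, None, {s})
```

Dropping the `w not in visited` check would instead search for restless temporal walks, which, unlike paths, can be found in polynomial time.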
Intelligent transportation (e.g., intelligent traffic lights) makes our travel more convenient and efficient. With the development of the mobile Internet and positioning technologies, it has become practical to collect spatio-temporal data and then leverage these data to achieve the goal of intelligent transportation, and here traffic prediction plays an important role. In this paper, we provide a comprehensive survey on traffic prediction, ranging from the spatio-temporal data layer to the intelligent transportation application layer. We first split the whole research scope into four parts from the bottom up: spatio-temporal data, preprocessing, traffic prediction, and traffic applications. We then review existing work on these four parts. First, we summarize traffic data into five types according to their differences in the spatial and temporal dimensions. Second, we focus on four significant data preprocessing techniques: map-matching, data cleaning, data storage, and data compression. Third, we focus on three kinds of traffic prediction problems (i.e., classification, generation, and estimation/forecasting). In particular, we summarize the challenges and discuss how existing methods address them. Fourth, we list five typical traffic applications. Lastly, we outline emerging research challenges and opportunities. We believe that this survey can help practitioners understand existing traffic prediction problems and methods, and further encourage them to build intelligent transportation applications.
We introduce multiplicative drift analysis as a suitable way to analyze the runtime of randomized search heuristics such as evolutionary algorithms. Our multiplicative version of the classical drift theorem allows easier analyses in the often encountered situation that the optimization progress is roughly proportional to the current distance to the optimum.
To display the strength of this tool, we regard the classical problem of how the (1+1) Evolutionary Algorithm optimizes an arbitrary linear pseudo-Boolean function. Here, we first give a relatively simple proof for the fact that any linear function is optimized in expected time O(n log n), where n is the length of the bit string. Afterwards, we show that in fact any such function is optimized in expected time at most (1+o(1)) 1.39 e n ln(n), again using multiplicative drift analysis. We also prove a corresponding lower bound of (1−o(1)) e n ln(n), which actually holds for all functions with a unique global optimum.
We further demonstrate how our drift theorem immediately gives natural proofs (with better constants) for the best known runtime bounds for the (1+1) Evolutionary Algorithm on combinatorial problems like finding minimum spanning trees, shortest paths, or Euler tours in graphs.
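The process under analysis is simple enough to simulate. Below is a minimal Python sketch of the (1+1) Evolutionary Algorithm on a linear function with positive weights (illustrative names and defaults, not code from the paper); the multiplicative drift intuition is that the remaining distance to the all-ones optimum tends to shrink by a factor proportional to its current value.

```python
import random

def one_plus_one_ea(weights, max_iters=200_000, seed=1):
    """(1+1) EA maximizing the linear function f(x) = sum(w_i * x_i)
    with positive weights. Standard bit-flip mutation: each bit is
    flipped independently with probability 1/n; the offspring is
    accepted if it is not worse (elitist selection)."""
    rng = random.Random(seed)
    n = len(weights)
    f = lambda x: sum(w * b for w, b in zip(weights, x))
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if all(x):            # unique optimum for positive weights
            return x, t
    return x, max_iters
```

Running this on BinVal (weights 2^i) or OneMax (all weights 1) shows convergence well within the O(n log n) ballpark the drift theorem predicts.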
In this work we consider temporal networks, i.e. networks defined by a labeling λ assigning to each edge of an underlying graph G a set of discrete time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by λ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a natural analogue of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the temporality of G, in which the goal is to minimize the maximum number of labels of an edge, and the other is the temporal cost of G, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some connectivity constraint. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.
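One natural variant of such path problems admits a simple one-pass solution. The sketch below computes earliest-arrival time-respecting paths (strictly increasing labels, instantaneous traversal) by scanning edges in label order; this is a standard textbook approach assumed here for illustration, not necessarily one of the paper's two algorithms, and it treats edges as directed (an undirected availability can be modeled as two directed edges).

```python
import math

def earliest_arrival(edges, s):
    """Earliest arrival time of a time-respecting path from s to every
    reachable vertex.

    edges: iterable of (u, v, label) directed edges, each available at the
    discrete time `label`. A path must use a strictly increasing sequence
    of labels. Single scan over edges sorted by label: O(m log m) time.
    """
    arrival = {s: -math.inf}   # the source is reachable "before time 0"
    for u, v, lab in sorted(edges, key=lambda e: e[2]):
        # The edge is usable only if u was reached strictly before `lab`;
        # since labels are scanned in increasing order, the first value
        # written to arrival[v] is already the smallest possible.
        if u in arrival and arrival[u] < lab:
            if lab < arrival.get(v, math.inf):
                arrival[v] = lab
    return arrival
```

The single forward scan is correct because an edge labeled ℓ can only be enabled by edges with strictly smaller labels, all of which have already been processed.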
In the first and so far only mathematical runtime analysis of an estimation-of-distribution algorithm (EDA) on a multimodal problem, Hasenöhrl and Sutton (GECCO 2018) showed for any k = o(n) that the compact genetic algorithm (cGA) with any hypothetical population size μ = Ω(ne^{4k} + n^{3.5+ε}) with high probability finds the optimum of the n-dimensional jump function with jump size k in time O(μn^{1.5} log n). We significantly improve this result for small jump sizes k ≤ (1/20) ln(n) − 1. In this case, already for μ = Ω(√n log n) ∩ poly(n) the runtime of the cGA with high probability is only O(μ√n). For the smallest admissible values of μ, our result gives a runtime of O(n log n), whereas the previous one only shows O(n^{5+ε}). Since it is known that the cGA with high probability needs at least Ω(μ√n) iterations to optimize the unimodal OneMax function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, here is able to cross moderate-sized valleys of low fitness at no extra cost. For large k, we show that the exponential (in k) runtime guarantee of Hasenöhrl and Sutton is tight and cannot be improved, also not by using a smaller hypothetical population size. We prove that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size k. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings. To complete the picture, we show that the cGA with hypothetical population size μ = Ω(log n) with high probability needs Ω(μ√n + n log n) iterations to optimize any n-dimensional jump function. This bound was known for OneMax, but, as we also show, the usual domination arguments do not allow one to extend lower bounds on the performance of the cGA on OneMax to arbitrary functions with a unique optimum. As a side result, we provide a simple general method based on parallel runs that, under mild conditions, (1) overcomes the need to specify a suitable population size and still gives a performance close to the one stemming from the best-possible population size, and (2) transforms EDAs with high-probability performance guarantees into EDAs with similar bounds on the expected runtime.
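For reference, the cGA itself is short. Below is a minimal, illustrative Python sketch with the usual frequency borders 1/n and 1−1/n, demonstrated on OneMax rather than on jump functions; function and parameter names are ours, and the defaults are only for the toy test.

```python
import random

def cga_onemax(n, mu, max_iters=500_000, seed=3):
    """Compact genetic algorithm on OneMax: maintain a frequency vector p,
    sample two individuals per iteration, move each frequency by 1/mu
    toward the better individual, and cap frequencies at the standard
    borders 1/n and 1 - 1/n. Returns the iteration in which the optimum
    was first sampled (or max_iters on failure)."""
    rng = random.Random(seed)
    p = [0.5] * n
    for it in range(1, max_iters + 1):
        x = [rng.random() < pi for pi in p]
        y = [rng.random() < pi for pi in p]
        if sum(y) > sum(x):
            x, y = y, x                  # x is the winner
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / mu if x[i] else -1.0 / mu
                p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i] + step))
        if sum(x) == n:
            return it
    return max_iters
```

The hypothetical population size μ only enters through the update step 1/μ: larger μ means slower, more stable frequency movement, which is exactly the parameter the bounds above are stated in.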
We study the parameterized complexity of the Bounded-Degree Vertex Deletion problem (BDD), where the aim is to find a maximum induced subgraph whose maximum degree is below a given degree bound. Our focus lies on parameters that measure the structural properties of the input instance. We first show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. We thereby resolve an open question stated in Betzler, Bredereck, Niedermeier and Uhlmann (2012) concerning the complexity of BDD parameterized by the feedback vertex set number. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the decompositional parameter tree-cut width and a novel problem-specific parameter called the core fracture number.
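To make the problem concrete, the following brute-force Python sketch solves tiny BDD instances exactly by enumerating deletion sets in order of increasing size; it illustrates the problem definition only, and is not one of the parameterized algorithms discussed above.

```python
from itertools import combinations

def min_bdd_deletion(vertices, edges, d):
    """Return a smallest vertex set whose deletion leaves every remaining
    vertex with degree at most d (equivalently, whose complement induces
    a maximum-size subgraph of maximum degree <= d).
    Exhaustive search; exponential, intended only for tiny instances."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(len(vertices) + 1):
        for dele in combinations(vertices, size):
            remaining = set(vertices) - set(dele)
            if all(len(adj[v] & remaining) <= d for v in remaining):
                return set(dele)
```

For d = 0 this specializes to Vertex Cover, which is one reason the problem inherits NP-hardness while still inviting the parameterized analysis above.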
The heavy-tailed mutation operator proposed in Doerr et al. (GECCO 2017), called fast mutation to agree with the previously used language, so far was proven to be advantageous only in mutation-based algorithms. There, it can relieve the algorithm designer from finding the optimal mutation rate and nevertheless obtain a performance close to the one that the optimal mutation rate gives. In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. For the (1+(λ,λ)) genetic algorithm optimizing the OneMax benchmark function, we show that with a heavy-tailed mutation rate a linear runtime can be achieved. This is asymptotically faster than what can be obtained with any static mutation rate, and is asymptotically equivalent to the runtime of the self-adjusting parameter choice of the (1+(λ,λ)) genetic algorithm. This result is complemented by an empirical study which shows the effectiveness of fast mutation also on random satisfiable MAX-3SAT instances.
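The heavy-tailed operator is easy to state: draw a strength α from a power-law distribution on {1, …, n/2} with exponent β > 1, then flip each bit independently with probability α/n. The sketch below assumes β = 1.5 and uses our own function names; it illustrates the operator commonly called fast mutation, not the authors' code.

```python
import random

def sample_heavy_tailed_rate(n, beta=1.5, rng=None):
    """Draw alpha from {1, ..., n//2} with P[alpha = i] proportional to
    i^(-beta), the power-law distribution underlying fast mutation, and
    return the resulting mutation rate alpha/n."""
    rng = rng or random.Random(4)
    m = n // 2
    weights = [i ** (-beta) for i in range(1, m + 1)]
    alpha = rng.choices(range(1, m + 1), weights=weights)[0]
    return alpha / n

def fast_mutate(x, beta=1.5, rng=None):
    """Standard bit mutation with a freshly sampled heavy-tailed rate."""
    rng = rng or random.Random(4)
    p = sample_heavy_tailed_rate(len(x), beta, rng)
    return [b ^ (rng.random() < p) for b in x]
```

Because the tail decays only polynomially, rates far above the classical 1/n are sampled with non-negligible probability, which is what lets a single static distribution mimic a well-tuned rate.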
Motivated by the placement of jobs in physical machines, we introduce and analyze the problem of online recoloring, or online disengagement. In this problem, we are given a set of n weighted vertices and a k-coloring of the vertices (vertices represent jobs, and colors represent physical machines). Edges, representing conflicts between jobs, are inserted in an online fashion. After every edge insertion, the algorithm must output a proper k-coloring of the vertices. The cost of recoloring a vertex is the vertex's weight. Our aim is to minimize the competitive ratio of the algorithm, i.e., the ratio between the cost paid by the online algorithm and the cost paid by an optimal, offline algorithm. We consider a couple of polynomially solvable coloring variants. Specifically, for 2-coloring bipartite graphs we present an O(log n)-competitive deterministic algorithm and an Ω(log n) lower bound on the competitive ratio of randomized algorithms. For (Δ+1)-coloring, where Δ is the maximal node degree, we present tight bounds of Θ(Δ) and Θ(log Δ) on the competitive ratios of deterministic and randomized algorithms, respectively. We also consider the fully dynamic case, which allows edge deletions as well as insertions. All our algorithms are applicable to the case where vertices are arbitrarily weighted, and all our lower bounds hold even in the uniform-weights (unweighted) case.
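For the 2-coloring variant, a natural building block is a "flip the lighter side" rule: when an inserted edge is monochromatic, recolor the lighter (by total weight) of the two components being merged. Since a flipped vertex's component at least doubles in weight at each flip, every vertex is recolored O(log n) times in the unweighted case. The class below is our own illustrative sketch of this rule, not necessarily the paper's O(log n)-competitive algorithm.

```python
class OnlineRecolor2:
    """Online 2-recoloring of a bipartite graph under edge insertions.

    When an inserted edge is monochromatic, flip all colors in the
    lighter of the two components being merged and pay the weight of
    every flipped vertex.
    """

    def __init__(self, weights):
        self.w = dict(weights)                 # vertex -> weight
        self.color = {v: 0 for v in self.w}    # start monochromatic
        self.comp = {v: {v} for v in self.w}   # vertex -> its component set
        self.cost = 0

    def insert_edge(self, u, v):
        cu, cv = self.comp[u], self.comp[v]
        if cu is cv:
            # Same component: the promise of bipartiteness guarantees
            # the endpoints already disagree.
            assert self.color[u] != self.color[v], "graph is not bipartite"
            return
        if self.color[u] == self.color[v]:
            # Flip the lighter component before merging.
            wu = sum(self.w[x] for x in cu)
            wv = sum(self.w[x] for x in cv)
            flip = cu if wu <= wv else cv
            for x in flip:
                self.color[x] ^= 1
                self.cost += self.w[x]
        merged = cu | cv
        for x in merged:
            self.comp[x] = merged
```

A weight-aware union-find would make the merges near-linear overall; explicit component sets keep the sketch short.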