We introduce the problem of constructing explicit variety evasive subspace families. Given a family $\mathcal{F}$ of subvarieties of a projective or affine space, a collection $\mathcal{H}$ of projective or affine $k$-subspaces is $(\mathcal{F}, \epsilon)$-evasive if for every $V \in \mathcal{F}$, all but at most an $\epsilon$-fraction of the subspaces $W \in \mathcal{H}$ intersect every irreducible component of $V$ with (at most) the expected dimension. The problem of constructing such an explicit subspace family generalizes both deterministic black-box polynomial identity testing (PIT) and the problem of constructing explicit (weak) lossless rank condensers.
Using Chow forms, we construct explicit $k$-subspace families of polynomial size that are evasive for all varieties of bounded degree in a projective or affine $n$-space. As one application, we obtain a complete derandomization of Noether's normalization lemma for varieties of low degree in a projective or affine $n$-space. In another application, we obtain a simple polynomial-time black-box PIT algorithm for depth-4 arithmetic circuits with bounded top fan-in and bottom fan-in that are not in the Sylvester–Gallai configuration, improving and simplifying a result of Gupta (ECCC TR 14-130).
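For context on the PIT connection: black-box PIT asks whether a polynomial, accessible only through evaluation queries, is identically zero. The following is a minimal sketch of the classical randomized baseline (Schwartz–Zippel testing) that explicit constructions of this kind aim to derandomize; it is not the construction of this paper, and the function names and parameter choices are illustrative.

```python
import random

def schwartz_zippel_pit(poly, n_vars, degree_bound, trials=20):
    """Randomized black-box PIT baseline (Schwartz-Zippel): evaluate
    the black-box polynomial at random points of a domain much larger
    than its degree. A nonzero polynomial of degree d vanishes on at
    most a d/|S| fraction of points of S^n."""
    domain_size = 100 * degree_bound  # |S| >> d keeps the error small
    for _ in range(trials):
        point = [random.randrange(domain_size) for _ in range(n_vars)]
        if poly(*point) != 0:
            return False  # witnessed a nonzero evaluation
    return True  # probably identically zero

# Example: (x + y)^2 - x^2 - 2xy - y^2 is identically zero, xy - 1 is not.
print(schwartz_zippel_pit(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 2))
print(schwartz_zippel_pit(lambda x, y: x*y - 1, 2, 2))
```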
As a complement to our explicit construction, we prove a tight lower bound on the size of $k$-subspace families that are evasive for degree-$d$ varieties in a projective $n$-space. When $n - k = n^{\Omega(1)}$, the lower bound is superpolynomial unless $d$ is bounded. The proof uses a dimension-counting argument on Chow varieties that parametrize projective subvarieties.
Combinatorial refinement techniques have proven to be an efficient approach to isomorphism testing for particular classes of graphs. If the number of refinement rounds is small, this places the corresponding isomorphism problem in a low-complexity class. We investigate the round complexity of the two-dimensional Weisfeiler--Leman algorithm on circulant graphs, i.e., on Cayley graphs of the cyclic group $\mathbb{Z}_n$, and prove that the number of rounds until stabilization is bounded by $O(d(n)\log n)$, where $d(n)$ is the number of divisors of $n$. As a particular consequence, isomorphism can be tested in NC for connected circulant graphs of order $p^\ell$ with $p$ an odd prime, $\ell > 3$, and vertex degree $\Delta$ smaller than $p$. We also show that the color refinement method (also known as the one-dimensional Weisfeiler--Leman algorithm) computes a canonical labeling for every non-trivial circulant graph with a prime number of vertices after individualization of two appropriately chosen vertices. Thus, the canonical labeling problem for this class of graphs is at most as hard as color refinement, which yields a time bound of $O(\Delta n \log n)$. Moreover, this provides a first example where a sophisticated approach to isomorphism testing put forward by Tinhofer has real practical relevance.
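To make the refinement process concrete, here is a minimal sketch of one-dimensional color refinement (1-WL) on an adjacency-list graph. It is a generic illustration, not the two-dimensional variant analyzed above, and all names are illustrative.

```python
from collections import Counter

def color_refinement(adj):
    """One-dimensional Weisfeiler-Leman (color refinement) sketch.
    adj: dict mapping vertex -> list of neighbours. Iteratively refines
    vertex colors by the multiset of neighbour colors until the
    partition stabilizes; returns the stable coloring."""
    colors = {v: 0 for v in adj}  # start with the uniform coloring
    while True:
        # New color = old color together with the multiset of neighbour colors.
        signatures = {
            v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
            for v in adj
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: relabel[signatures[v]] for v in adj}
        if new_colors == colors:  # partition stable: one round past refinement
            return colors
        colors = new_colors

# Example: a 6-cycle is vertex-transitive, so all vertices share one color.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(color_refinement(cycle6))
```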
Although widely applied in optimisation, relatively little has been proven rigorously about the role and behaviour of populations in randomised search processes. This paper presents a new method to prove upper bounds on the expected optimisation time of population-based randomised search heuristics that use non-elitist selection mechanisms and unary variation operators. Our results follow from a detailed drift analysis of the population dynamics in these heuristics. This analysis shows that the optimisation time depends on the relationship between the strength of the selective pressure and the degree of variation introduced by the variation operator. Given limited variation, a surprisingly weak selective pressure suffices to optimise many functions in expected polynomial time. We derive upper bounds on the expected optimisation time of non-elitist evolutionary algorithms (EA) using various selection mechanisms, including fitness proportionate selection. We show that EAs using fitness proportionate selection can optimise standard benchmark functions in expected polynomial time given a sufficiently low mutation rate. As a second contribution, we consider an optimisation scenario with partial information, where fitness values of solutions are only partially available. We prove that non-elitist EAs under a set of specific conditions can optimise benchmark functions in expected polynomial time, even when vanishingly little information about the fitness values of individual solutions or populations is available. To our knowledge, this is the first runtime analysis of randomised search heuristics under partial information.
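As a concrete illustration of the kind of algorithm covered by the analysis, here is a minimal sketch of a non-elitist EA with fitness-proportionate selection and bitwise mutation on OneMax. The population size and mutation rate are illustrative choices, not the thresholds derived in the paper.

```python
import random

def onemax(x):
    return sum(x)

def non_elitist_ea(n=20, pop_size=100, mutation_rate=None, max_gens=2000):
    """Non-elitist EA sketch: fitness-proportionate (roulette-wheel)
    selection followed by bitwise mutation, a unary variation operator.
    No elitism: the whole population is replaced every generation."""
    if mutation_rate is None:
        mutation_rate = 1.0 / (2 * n)  # deliberately low mutation rate
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for gen in range(max_gens):
        fits = [onemax(x) for x in pop]
        if max(fits) == n:
            return gen  # optimum found
        # Select parents with probability proportional to fitness, then mutate.
        parents = random.choices(pop, weights=fits, k=pop_size)
        pop = [[1 - b if random.random() < mutation_rate else b for b in x]
               for x in parents]
    return None  # budget exhausted

print(non_elitist_ea())
```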
We investigate two types of graph layouts, track layouts and layered path decompositions, and the relations between their associated parameters, track-number and layered pathwidth. We use these two types of layouts to characterize leveled planar graphs, which are the graphs with planar leveled drawings with no dummy vertices. It follows from the known NP-completeness of leveled planarity that recognizing graphs of bounded track-number or layered pathwidth is also NP-complete, even for the smallest constant parameter values that make these parameters nontrivial. We prove that the graphs with bounded layered pathwidth include outerplanar graphs, Halin graphs, and squaregraphs, but that (despite having bounded track-number) series–parallel graphs do not have bounded layered pathwidth. Finally, we investigate the parameterized complexity of these layouts, showing that past methods used for book layouts do not work to parameterize the problem by treewidth or almost-tree number, but that the problem is (non-uniformly) fixed-parameter tractable for tree-depth.
Advances in mobile equipment and localization techniques have improved the accuracy of location-based services (LBS) in mobile networks. One core issue for the industry in exploiting the economic potential of LBSs is to make appropriate point-of-interest (POI) recommendations based on users' interests. Today, LBS applications expect recommender systems to recommend the accurate next POI in an anonymous manner, without inquiring into users' attributes or knowing the detailed features of the vast number of POIs. To cope with this challenge, we propose a novel attentive model that recommends appropriate new POIs to users, named Geographical Attentive Recommendation via Graph (GARG), which takes full advantage of collaborative, sequential and content-aware information. Unlike previous strategies that treat all POIs in the sequence equally or manually define the relationships between POIs, GARG adaptively differentiates the relevance of each POI in the sequence to the prediction and automatically identifies POI-wise correlations. Extensive experiments on three real-world datasets demonstrate the effectiveness of GARG and reveal a significant improvement by GARG on the precision, recall and mAP metrics, compared to several state-of-the-art baseline methods.
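To illustrate the attentive weighting idea in isolation, here is a generic scaled dot-product attention sketch, not GARG's actual architecture: embeddings of previously visited POIs are weighted by their relevance to a candidate POI instead of being treated equally.

```python
import numpy as np

def attentive_pooling(seq_emb, target_emb):
    """Generic attentive pooling sketch (not GARG's exact model):
    weight each visited-POI embedding by its relevance to the
    candidate POI, then aggregate into a user representation.
    seq_emb: (L, d) visited-POI embeddings; target_emb: (d,) candidate."""
    scores = seq_emb @ target_emb / np.sqrt(seq_emb.shape[1])  # relevance logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # softmax over the check-in sequence
    return weights @ seq_emb  # relevance-weighted aggregation

rng = np.random.default_rng(0)
user_repr = attentive_pooling(rng.normal(size=(5, 8)), rng.normal(size=8))
print(user_repr.shape)  # (8,)
```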
The knapsack problem is one of the classical problems in combinatorial optimization: given a set of items, each specified by its size and profit, the goal is to find a maximum-profit packing into a knapsack of bounded capacity. In the online setting, items are revealed one by one, and the decision whether the current item is packed or discarded forever must be made immediately and irrevocably upon arrival. We study the online variant in the random order model, where the input sequence is a uniform random permutation of the item set. We develop a randomized (1/6.65)-competitive algorithm for this problem, outperforming the current best algorithm of competitive ratio 1/8.06 (Kesselheim et al. in SIAM J Comput 47(5):1939–1964, 2018). Our algorithm is based on two new insights: We introduce a novel algorithmic approach that employs two given algorithms, optimized for restricted item classes, sequentially on the input sequence. In addition, we study and exploit the relationship of the knapsack problem to the 2-secretary problem. The generalized assignment problem (GAP) includes, besides the knapsack problem, several important problems related to scheduling and matching. We show that, in the same online setting, applying the proposed sequential approach yields a (1/6.99)-competitive randomized algorithm for GAP. Again, our proposed algorithm outperforms the current best result of competitive ratio 1/8.06 (Kesselheim et al. in SIAM J Comput 47(5):1939–1964, 2018).
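For intuition about the random-order model, here is a toy sample-then-threshold sketch. It is not the paper's (1/6.65)-competitive algorithm; the sampling fraction and the density-threshold rule are illustrative assumptions in the spirit of secretary-type algorithms.

```python
import random

def sample_then_threshold(items, capacity, sample_frac=0.5):
    """Toy random-order online knapsack sketch (NOT the paper's
    1/6.65-competitive algorithm): observe a prefix without packing,
    set a profit-density threshold from it, then greedily pack later
    items that beat the threshold and still fit. Decisions are
    immediate and irrevocable, as in the online model."""
    order = items[:]
    random.shuffle(order)  # random order model: uniform random permutation
    cutoff = int(sample_frac * len(order))
    sample = order[:cutoff]
    threshold = max((profit / size for size, profit in sample), default=0.0)
    used, total_profit = 0.0, 0.0
    for size, profit in order[cutoff:]:
        if profit / size >= threshold and used + size <= capacity:
            used += size          # irrevocable accept
            total_profit += profit
    return total_profit

items = [(3, 10), (5, 9), (2, 7), (4, 4), (1, 3)]  # (size, profit) pairs
print(sample_then_threshold(items, capacity=6))
```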
The complexity theory for black-box algorithms, introduced by Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006), describes common limits on the efficiency of a broad class of randomised search heuristics. There is an obvious trade-off between the generality of the black-box model and the strength of the bounds that can be proven in such a model. In particular, the original black-box model provides relatively small lower bounds for well-known benchmark problems, bounds which seem unrealistic in certain cases and are typically not met by popular search heuristics.
In this paper, we introduce a more restricted black-box model for the optimisation of pseudo-Boolean functions which, we claim, captures the working principles of many randomised search heuristics, including simulated annealing, evolutionary algorithms, randomised local search, and others. The key concept worked out is that of an unbiased variation operator. For this class of algorithms, significantly better lower bounds on the black-box complexity are proved, amongst them an $\Omega(n \log n)$ bound for functions with unique optimum. Moreover, a simple unimodal function and plateau functions are considered. We show that a simple (1+1) EA is able to match the runtime bounds in several cases.
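As a concrete instance, here is a minimal sketch of the (1+1) EA with standard bit mutation, which is an unbiased unary variation operator in the sense of the restricted model. The stopping criterion assumes a function such as OneMax whose optimal value is $n$; that assumption is ours, for the sake of a runnable example.

```python
import random

def one_plus_one_ea(f, n, max_iters=100_000):
    """Minimal (1+1) EA sketch: standard bit mutation (flip each bit
    independently with probability 1/n) is unbiased, i.e. invariant
    under bit flips and permutations of bit positions."""
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if f(y) >= f(x):  # elitist acceptance of the offspring
            x = y
        if f(x) == n:  # assumes the optimum value of f is n (e.g. OneMax)
            return t
    return None

# OneMax has a unique optimum; the (1+1) EA needs Theta(n log n) steps on it.
print(one_plus_one_ea(lambda x: sum(x), n=50))
```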
We present the first results on the parameterized complexity of reconfiguration problems, where a reconfiguration variant of an optimization problem $Q$ takes as input two feasible solutions $S$ and $T$ and determines if there is a sequence of reconfiguration steps, i.e., a reconfiguration sequence, that can be applied to transform $S$ into $T$ such that each step results in a feasible solution to $Q$. For most of the results in this paper, $S$ and $T$ are sets of vertices of a given graph and a reconfiguration step adds or removes a vertex. Our study is motivated by results establishing that for many NP-hard problems, the classical complexity of reconfiguration is PSPACE-complete. We address the question for several important graph properties under two natural parameterizations: $k$, a bound on the size of solutions, and $\ell$, a bound on the length of reconfiguration sequences. Our first general result is an algorithmic paradigm, the reconfiguration kernel, used to obtain fixed-parameter tractable algorithms for reconfiguration variants of Vertex Cover and, more generally, Bounded Hitting Set and Feedback Vertex Set, all parameterized by $k$. In contrast, we show that reconfiguring Unbounded Hitting Set is W[2]-hard when parameterized by $k + \ell$. We also demonstrate the W[1]-hardness of reconfiguration variants of a large class of maximization problems parameterized by $k + \ell$, and of their corresponding deletion problems parameterized by $\ell$; in doing so, we show that there exist problems in FPT when parameterized by $k$, but whose reconfiguration variants are W[1]-hard when parameterized by $k + \ell$.
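To make the setting concrete, here is a brute-force sketch of the reconfiguration variant of Vertex Cover under vertex addition/removal. It simply runs BFS over all covers of size at most $k$ and is exponential; it only illustrates the roles of the parameters $k$ and $\ell$ (here max_len), and is not the kernelization-based algorithm above.

```python
from collections import deque

def vc_reconfigurable(edges, S, T, k, max_len):
    """BFS sketch for Vertex Cover reconfiguration: each step adds or
    removes one vertex, every intermediate set must be a vertex cover
    of size <= k, and the sequence may have length <= max_len."""
    vertices = {v for e in edges for v in e}
    def is_cover(X):
        return all(u in X or v in X for u, v in edges)
    start, goal = frozenset(S), frozenset(T)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        X, dist = queue.popleft()
        if X == goal:
            return True
        if dist == max_len:
            continue  # length budget exhausted on this branch
        neighbours = [X | {v} for v in vertices - X] + [X - {v} for v in X]
        for Y in neighbours:
            if len(Y) <= k and is_cover(Y) and Y not in seen:
                seen.add(Y)
                queue.append((Y, dist + 1))
    return False

# {1,2} -> {1,2,3} -> {1,3} stays a cover of size <= 3 throughout.
edges = [(0, 1), (1, 2), (2, 3)]
print(vc_reconfigurable(edges, S={1, 2}, T={1, 3}, k=3, max_len=4))
```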
The well-known NP-complete problem Matching Cut is to decide if a graph has a matching that is also an edge cut of the graph. We prove new complexity results for Matching Cut restricted to $H$-free graphs, that is, graphs that do not contain some fixed graph $H$ as an induced subgraph. We also prove new complexity results for two recently studied variants of Matching Cut on $H$-free graphs. The first variant requires that the matching cut be extendable to a perfect matching of the graph. The second variant requires the matching cut itself to be a perfect matching. In particular, we prove that there exists a small constant $r > 0$ such that the first variant is NP-complete for $P_r$-free graphs. This addresses a question of Bouquet and Picouleau (The complexity of the Perfect Matching-Cut problem. CoRR, arXiv:2011.03318, 2020). For all three problems, we give state-of-the-art summaries of their computational complexity for $H$-free graphs.
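To make the definition concrete, here is a brute-force sketch that tests for a matching cut by enumerating vertex bipartitions. It is exponential and purely illustrative; it is not one of the algorithms underlying the results above.

```python
from itertools import combinations

def has_matching_cut(vertices, edges):
    """Brute-force sketch: a (connected) graph has a matching cut iff
    some bipartition (A, B) with A, B nonempty has crossing edges that
    form a matching, i.e. touch each vertex at most once."""
    vs = list(vertices)
    for r in range(1, len(vs)):
        for A_tuple in combinations(vs[1:], r):  # fix vs[0] on the B side
            A = set(A_tuple)
            cut = [(u, v) for u, v in edges if (u in A) != (v in A)]
            touched = [w for e in cut for w in e]
            if len(touched) == len(set(touched)):  # no repeated endpoint
                return True
    return False

# A 4-cycle has a matching cut (two opposite edges); K4 does not.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(has_matching_cut(range(4), c4), has_matching_cut(range(4), k4))
```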