In this paper, we discuss the problems of finding minimum stopping sets and trapping sets in Tanner graphs, using integer linear programming. These problems are important for establishing reliable communication across noisy channels. Indeed, stopping sets and trapping sets correspond to combinatorial structures in information-theoretic codes that lead to errors in decoding once a message is received. In particular, these sets correspond to variables that are never corrected by the decoder, thus causing decoding failures when iterative algorithms are used. We present integer linear programs (ILPs) for finding stopping sets and several trapping set variants, and we give the first proofs that two of these trapping set variants are NP-hard. Additionally, we analyze these variants from the parameterized perspective. The effectiveness of our approach is demonstrated by finding stopping sets of size up to 48 in the (4896,2474) Margulis code. This compares favorably to the current state of the art, which finds stopping sets of size up to 26. For the trapping set problems, we show in which cases an ILP produces solutions efficiently and in which cases it performs poorly. The proposed approach is applicable to codes represented by regular and irregular graphs alike.
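To make the modeling idea concrete, the following is a minimal sketch of one standard ILP encoding of the minimum stopping set problem, written with the PuLP library. It is an illustration under the usual definition (every check node touching the set must touch it at least twice), not necessarily the paper's exact formulation; the function and argument names are ours.

```python
# Minimal sketch: minimum stopping set as an ILP (PuLP); not the paper's exact model.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def min_stopping_set(var_nodes, check_neighborhoods):
    """var_nodes: iterable of variable-node ids.
    check_neighborhoods: dict mapping each check node to the set of
    variable nodes adjacent to it in the Tanner graph."""
    prob = LpProblem("min_stopping_set", LpMinimize)
    x = {v: LpVariable(f"x_{v}", cat=LpBinary) for v in var_nodes}
    prob += lpSum(x.values())            # objective: smallest set
    prob += lpSum(x.values()) >= 1       # exclude the empty set
    for c, nbrs in check_neighborhoods.items():
        for v in nbrs:
            # If v is chosen, some other neighbor of check c must be chosen too,
            # so every check touching the set is touched at least twice.
            prob += lpSum(x[u] for u in nbrs if u != v) >= x[v]
    prob.solve()
    return [v for v in var_nodes if x[v].value() > 0.5]
```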
This paper is concerned with a new variant of bin packing called priority-based bin packing with subset constraints (PBBP-SC). PBBP-SC is a variant of traditional bin packing (TBP). In a TBP instance, we are given a collection of items {t1, t2, …, tn}, where each item ti has size si ∈ (0,1). The goal of TBP is to pack the items into as few unit-sized bins as possible. In a PBBP-SC instance, we are given a collection of n unit-size items and a collection of m bins of varying capacities. Associated with each item is a positive integer called its priority, which indicates the importance of including the item in a (possibly infeasible) packing. As in the traditional case, the items need to be packed into the fewest bins possible. What complicates the problem is that each item can be assigned to only one of a select set of bins, i.e., the bins are not interchangeable. We investigate several problems associated with PBBP-SC. The first is the feasibility problem: checking whether a given PBBP-SC instance admits a feasible assignment. The second is the priority maximization problem: finding a maximum-priority assignment of an infeasible PBBP-SC instance. The third is the bin minimization problem: finding an assignment that packs all the items of a feasible PBBP-SC instance into the fewest bins. We derive a number of results for these problems from both the algorithmic and computational complexity perspectives. In particular, we show that both the feasibility and priority maximization problems are solvable in polynomial time. We also show that the bin minimization problem is log-APX-complete and cannot be solved in time o(1.41^m) unless the Strong Exponential Time Hypothesis fails.
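The abstract states that feasibility is decidable in polynomial time but does not say how. One natural way to check it, assuming integer bin capacities, is a maximum-flow reduction over the item-to-allowed-bin bipartite graph; this sketch (using networkx) is our illustration, not necessarily the authors' algorithm.

```python
# Sketch: PBBP-SC feasibility via max flow, assuming integer bin capacities.
import networkx as nx

def is_feasible(items, bin_capacities, allowed_bins):
    """items: list of item ids (unit size); bin_capacities: dict bin_id -> capacity;
    allowed_bins: dict item_id -> set of bins the item may be placed in."""
    G = nx.DiGraph()
    for it in items:
        G.add_edge("s", ("item", it), capacity=1)
        for b in allowed_bins[it]:
            G.add_edge(("item", it), ("bin", b), capacity=1)
    for b, cap in bin_capacities.items():
        G.add_edge(("bin", b), "t", capacity=cap)
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    # Feasible exactly when every unit-size item can be routed to some allowed bin.
    return flow_value == len(items)
```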
In this paper, we investigate the problem of determining s−t reachability in choice networks. In the traditional s−t reachability problem, we are given a weighted network tuple G = 〈V, E, c, s, t〉, and the goal is to check whether there exists a path from s to t in G. In an optional choice network, we are given a choice set S ⊆ E × E in addition to the network tuple G. In the s−t reachability problem in choice networks (OCRD), the goal is to determine whether there exists a path from vertex s to vertex t, with the caveat that at most one edge from each edge pair (x, y) ∈ S is used on the path. OCRD finds applications in a number of domains, including routing in wireless networks and sensor placement. We analyze the computational complexity of the OCRD problem and its variants from a number of algorithmic perspectives. We show that the problem is NP-complete in directed acyclic graphs with bounded pathwidth, and that its optimization version is NPO PB-complete. We also show that the problem is fixed-parameter tractable in the cardinality of the choice set S; in particular, it can be solved in time O∗(1.42^|S|). We further consider weighted versions of the OCRD problem and detail their computational complexities; in particular, the optimization version of the WOCRD problem is NPO-complete. While similar results have been obtained for related problems, ours improve on them by providing stronger bounds or by covering more restricted classes of graphs.
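For intuition, the brute-force approach below enumerates, for every choice pair, which of its two edges to forbid and then runs a plain reachability check. It only illustrates the problem statement in 2^|S| time; it is not the paper's O∗(1.42^|S|) algorithm, and the function and argument names are ours.

```python
# Naive 2^|S| enumeration for OCRD; illustrative only.
from collections import deque
from itertools import product

def ocrd_reachable(edges, choice_pairs, s, t):
    """edges: list of directed edges (u, v).
    choice_pairs: list of pairs of edges; at most one edge per pair may be used."""
    for picks in product(range(2), repeat=len(choice_pairs)):
        banned = {pair[1 - keep] for pair, keep in zip(choice_pairs, picks)}
        adj = {}
        for u, v in (e for e in edges if e not in banned):
            adj.setdefault(u, []).append(v)
        seen, queue = {s}, deque([s])
        while queue:                      # BFS in the restricted graph
            u = queue.popleft()
            if u == t:
                return True
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return False
```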
Berkeley's so-called "master argument," by which he demonstrates his principle esse est percipi, has received mixed reviews from commentators: some defend its validity on the basis of their own interpretations, while others accuse it of being fallacious on the strength of various objections. This article defends the master argument against three objections, due to Russell, Pitcher, and Tipton, referred to as the "objection from the confusion between the perceptual act and the perceived object," the "objection from the confusion between the concept of the object and the object itself," and the "objection from solipsism of the present moment." The author proposes his own reading of the master argument to avoid misunderstandings and maintains that the argument acquires its true meaning from the following points: the clarification of the Berkeleyan concepts of idea and perception; making explicit the intentionality of perception, in the sense of its intentional directedness toward intentional objects (ideas); and the distinction between the two levels of intentional directedness in mediate perception (of one idea by means of another), namely, one directed at the concept or mental representation as its immediate intentional object and the other directed at the represented object as its mediate intentional object.
This survey paper delves into the emerging and critical area of symbolic knowledge distillation in Large Language Models (LLMs). As LLMs like Generative Pre-trained Transformer-3 (GPT-3) and Bidirectional Encoder Representations from Transformers (BERT) continue to expand in scale and complexity, the challenge of effectively harnessing their extensive knowledge becomes paramount. This survey concentrates on the process of distilling the intricate, often implicit knowledge contained within these models into a more symbolic, explicit form. This transformation is crucial for enhancing the interpretability, efficiency, and applicability of LLMs. We categorize the existing research based on methodologies and applications, focusing on how symbolic knowledge distillation can be used to improve the transparency and functionality of smaller, more efficient Artificial Intelligence (AI) models. The survey discusses the core challenges, including maintaining the depth of knowledge in a comprehensible format, and explores the various approaches and techniques that have been developed in this field. We identify gaps in current research and potential opportunities for future advancements. This survey aims to provide a comprehensive overview of symbolic knowledge distillation in LLMs, spotlighting its significance in the progression towards more accessible and efficient AI systems.
The success of machine learning solutions for reasoning about discrete structures has brought attention to their adoption within combinatorial optimization algorithms. Such approaches generally rely on supervised learning by leveraging datasets of the combinatorial structures of interest drawn from some distribution of problem instances. Reinforcement learning has also been employed to find such structures. In this paper, we propose a different approach in that no data is required for training the neural networks that produce the solution. In this sense, what we present is not a machine learning solution, but rather one that is dependent on neural networks and where backpropagation is applied to a loss function defined by the structure of the neural network architecture as opposed to a training dataset. In particular, we reduce the popular combinatorial optimization problem of finding a maximum independent set to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest. Additionally, we propose a universal graph reduction procedure to handle large-scale graphs. The reduction exploits community detection for graph partitioning and is applicable to any graph type and/or density. Experimental results on both real and synthetic graphs demonstrate that our proposed method performs on par with or outperforms state-of-the-art learning-based methods in terms of the size of the found set, without requiring any training data.
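The following PyTorch sketch conveys the general "dataless" idea described above: the only training signal is a differentiable objective defined by the graph itself (reward set size, penalize edges inside the set), followed by rounding. It is a simplified illustration, not the authors' architecture or their community-detection-based reduction, and the hyperparameters are arbitrary.

```python
# Simplified sketch of a dataless, graph-defined objective for maximum independent set.
import torch

def dataless_mis(n, edges, gamma=2.0, steps=2000, lr=0.05, seed=0):
    """n: number of vertices; edges: list of (u, v) index pairs."""
    torch.manual_seed(seed)
    theta = torch.nn.Parameter(torch.rand(n))     # one trainable value per vertex
    opt = torch.optim.Adam([theta], lr=lr)
    src = torch.tensor([u for u, v in edges])
    dst = torch.tensor([v for u, v in edges])
    for _ in range(steps):
        p = torch.sigmoid(theta)                  # relaxed membership in [0, 1]
        loss = -p.sum() + gamma * (p[src] * p[dst]).sum()  # size reward, edge penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    chosen = set((torch.sigmoid(theta) > 0.5).nonzero().flatten().tolist())
    for u, v in edges:                            # greedy repair of any violated edge
        if u in chosen and v in chosen:
            chosen.discard(v)
    return chosen
```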
Farkas Bounds on Horn Constraint Systems. Subramani, K.; Wojciechowski, Piotr; Velasquez, Alvaro. Theory of Computing Systems, vol. 68, no. 2, April 2024 (peer-reviewed journal article).
In this paper, we analyze the copy complexity of unsatisfiable Horn constraint systems under the ADD refutation system. Recall that a linear constraint of the form ∑_{i=1}^{n} ai·xi ≥ b is said to be a Horn constraint if all the ai ∈ {0, 1, −1} and at most one of the ai is positive. A conjunction of such constraints is called a Horn constraint system (HCS). Horn constraints arise in a number of domains including, but not limited to, program verification, power systems, econometrics, and operations research. The ADD refutation system is both sound and complete. Additionally, it is the simplest and most natural refutation system for refuting the feasibility of a system of linear constraints. The copy complexity of an infeasible linear constraint system (not necessarily Horn) in a refutation system is the minimum number of times each constraint needs to be replicated in order to obtain a read-once refutation. We show that for an HCS with n variables and m constraints, the copy complexity is at most 2^(n−1) in the ADD refutation system. Additionally, we analyze bounded-width HCSs from the perspective of copy complexity. Finally, we provide an empirical analysis of an integer programming formulation of the copy complexity problem in HCSs. (An extended abstract was published in FroCoS 2021 [26].)
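The abstract mentions an integer programming formulation of the copy complexity problem without giving it. One natural encoding, assuming integer coefficients and right-hand sides, searches for nonnegative integer multipliers of the constraints whose weighted sum is the contradiction 0 ≥ 1 and minimizes the largest multiplier. The PuLP sketch below shows that encoding; the paper's actual formulation may differ.

```python
# Sketch of one possible IP for copy complexity under ADD (integer data assumed).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, LpStatusOptimal

def copy_complexity_ip(A, b, max_copies=64):
    """A: m x n integer coefficient matrix, b: length-m right-hand sides,
    for the (infeasible) system A x >= b."""
    m, n = len(A), len(A[0])
    prob = LpProblem("copy_complexity", LpMinimize)
    k = LpVariable("k", lowBound=1, upBound=max_copies, cat=LpInteger)
    c = [LpVariable(f"c_{i}", lowBound=0, upBound=max_copies, cat=LpInteger)
         for i in range(m)]
    prob += k                                            # minimize copies per constraint
    for j in range(n):                                   # summed coefficients must cancel
        prob += lpSum(c[i] * A[i][j] for i in range(m)) == 0
    prob += lpSum(c[i] * b[i] for i in range(m)) >= 1    # summed RHS must stay positive
    for i in range(m):
        prob += c[i] <= k                                # no constraint copied more than k times
    prob.solve()
    return int(k.value()) if prob.status == LpStatusOptimal else None
```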
In this paper, we study the efficacy of several mathematical programming formulations for the single-source shortest path problem, the negative cost cycle detection problem, and the shortest negative cost cycle problem in arc-dependent networks. In an arc-dependent network, the cost of an arc a depends upon the arc preceding a. These networks differ from traditional networks, in which the cost associated with an arc is a fixed constant and part of the input. Arc-dependent networks are useful for modeling a number of real-world problems, such as the turn-penalty shortest path problem, which cannot be captured in the traditional network setting. We present new integer and non-linear programming formulations for each problem. We also perform the first known empirical study for arc-dependent networks to contrast the execution times of the two formulations on a set of graphs with varying families and sizes. Our experiments indicate that although non-linear programming formulations are more compact, integer programming formulations are more efficient for the problems studied in this paper. Additionally, we introduce a number of cuts for each integer programming formulation and examine their effectiveness.
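For readers unfamiliar with arc-dependent costs, a textbook way to handle them is the "arc node" (line-graph style) reduction: treat each arc as a node, connect consecutive arcs, and relax with Bellman-Ford so negative costs are tolerated. The sketch below illustrates that reduction only; it is not necessarily one of the formulations compared in the paper, and the names are ours.

```python
# Sketch: arc-dependent shortest path via the arc-node reduction plus Bellman-Ford.
import math

def arc_dependent_shortest_path(arcs, cost, source_cost, s, t):
    """arcs: list of directed arcs (u, v); cost[(a_prev, a)]: cost of traversing
    arc a immediately after a_prev; source_cost[a]: cost of a as the first arc."""
    INF = math.inf
    dist = {a: (source_cost.get(a, INF) if a[0] == s else INF) for a in arcs}
    for _ in range(len(arcs)):                   # Bellman-Ford over arc nodes
        updated = False
        for a_prev in arcs:
            if dist[a_prev] == INF:
                continue
            for a in arcs:
                if a_prev[1] != a[0] or (a_prev, a) not in cost:
                    continue
                if dist[a_prev] + cost[(a_prev, a)] < dist[a]:
                    dist[a] = dist[a_prev] + cost[(a_prev, a)]
                    updated = True
        if not updated:        # a further successful round here would signal a negative cycle
            break
    return min((d for a, d in dist.items() if a[1] == t), default=INF)
```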
Compact in-memory circuits for implementing n-bit addition with varying degrees of destructive operations are presented. The non-destructive, semi-destructive, and fully-destructive adder variants are posed as bounded model checking procedures for design synthesis. The resulting designs are shown to be state-of-the-art in the number of execution steps required for computation when compared to other serial adders.
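The abstract poses synthesis as bounded model checking without showing an encoding. The skeleton below, using the z3 solver, only illustrates the generic BMC unrolling step (copy the state per time step, ask whether a goal is reachable within a bound); the init, trans, and goal callbacks are hypothetical placeholders, and the paper's actual in-memory adder encoding is not reproduced.

```python
# Generic bounded-model-checking unrolling skeleton with z3; illustrative only.
from z3 import Solver, Bool, sat

def bmc_reaches(n_state_bits, init, trans, goal, max_steps):
    """init(s0), trans(s_t, s_t1), goal(s_t1): caller-supplied functions returning
    z3 Boolean constraints over lists of state bits."""
    def state(t):
        return [Bool(f"s_{t}_{i}") for i in range(n_state_bits)]
    states = [state(t) for t in range(max_steps + 1)]
    solver = Solver()
    solver.add(init(states[0]))
    for t in range(max_steps):
        solver.add(trans(states[t], states[t + 1]))   # unroll one more step
        solver.push()
        solver.add(goal(states[t + 1]))               # is the goal reachable by now?
        if solver.check() == sat:
            return t + 1, solver.model()
        solver.pop()
    return None, None
```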