The recent FLIP cipher is an encryption scheme described by Méaux et al. at EUROCRYPT 2016. It is based on a new stream cipher model called the filter permutator and aims to minimize several parameters, including the multiplicative depth. In the filter permutator, the input to the Boolean function has constant Hamming weight equal to the weight of the secret key. As a consequence, Boolean functions satisfying good cryptographic criteria when restricted to the set of vectors of constant Hamming weight play an important role in the FLIP stream cipher. Carlet et al. have shown that for Boolean functions with restricted input, the balancedness and nonlinearity parameters continue to play an important role with respect to the corresponding attacks on the FLIP framework. In particular, Boolean functions that are uniformly distributed over F_2 on E_{n,k} = {x ∈ F_2^n | wt(x) = k} for every 0 < k < n are called weightwise perfectly balanced (WPB) functions, where wt(x) denotes the Hamming weight of x. In this paper, we first propose two methods of constructing weightwise perfectly balanced Boolean functions in 2^k variables (where k is a positive integer) by modifying the support of linear and quadratic functions. Furthermore, we derive a construction of n-variable weightwise almost perfectly balanced Boolean functions for any positive integer n.
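As a quick illustration of the WPB property, the slice-by-slice balancedness condition can be checked by brute force for small n. A minimal sketch (the function name is ours, not from the paper):

```python
from itertools import combinations

def is_wpb(f, n):
    """Check whether f: {0,1}^n -> {0,1} is weightwise perfectly
    balanced: on every slice E_{n,k} = {x : wt(x) = k}, 0 < k < n,
    f takes the values 0 and 1 equally often."""
    for k in range(1, n):
        values = []
        for ones in combinations(range(n), k):
            x = tuple(1 if i in ones else 0 for i in range(n))
            values.append(f(x))
        if 2 * sum(values) != len(values):
            return False
    return True
```

For example, on n = 2 the single dictator function f(x) = x_1 is WPB (it splits the slice {(1,0), (0,1)} evenly), while f(x) = x_1 AND x_2 is not. Note that WPB functions can only exist when every binomial coefficient C(n, k), 0 < k < n, is even, which forces n to be a power of 2 — hence the paper's focus on 2^k variables.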
Boolean networks (BNs) are well-studied models of genomic regulation in biology, where nodes are genes and their state transitions are controlled by Boolean functions. We propose to learn Boolean functions as Boolean formulas in disjunctive normal form (DNF) by an explainable neural network, Mat_DNF, and apply it to learning BNs. Directly expressing DNFs as a pair of binary matrices, we learn them using a single-layer NN by minimizing a logically inspired non-negative cost function to zero. As a result, every parameter in the network has a clear meaning, representing a conjunction or a literal in the learned DNF. We can also prove that learning DNFs by the proposed approach is equivalent to inferring interpolants in logic between the positive and negative data. We applied our approach to learning three literature-curated BNs and confirmed its effectiveness. We also examine how generalization occurs when learning data is scarce. In doing so, we introduce two new operations that can improve accuracy, or equivalently generalizability, for scarce data. The first is to append a noise vector to the input learning vector. The second is to continue learning even after the learning error becomes zero. The first operation can be explained in terms of the second. These two operations help us choose a learnable DNF, i.e., a root of the cost function, that achieves high generalizability.
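To make the binary-matrix encoding of a DNF concrete, here is a minimal sketch (the encoding convention is our illustration, not necessarily the one used in Mat_DNF): one matrix marks positive literals and the other negated literals, one row per conjunction.

```python
def eval_dnf(P, N, x):
    """Evaluate a DNF encoded as two binary matrices, one row per
    conjunction: P marks positive literals, N marks negated literals.
    Conjunction j fires iff every P-marked variable is 1 and every
    N-marked variable is 0 in x; the DNF is the OR over rows."""
    for Pj, Nj in zip(P, N):
        if all(xi == 1 for p, xi in zip(Pj, x) if p) and \
           all(xi == 0 for q, xi in zip(Nj, x) if q):
            return 1
    return 0

# (x1 AND NOT x2) OR (x2 AND x3), in this hypothetical encoding
P = [[1, 0, 0], [0, 1, 1]]
N = [[0, 1, 0], [0, 0, 0]]
```

Under this encoding, a learned pair (P, N) is directly readable as a formula, which is what makes each network parameter interpretable.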
Linear codes from simplicial complexes
Chang, Seunghwan; Hyun, Jong Yoon
Designs, Codes, and Cryptography, Volume 86, Issue 10, 10/2018
Journal Article; Peer reviewed
In this article we introduce a method of constructing binary linear codes and computing their weights by means of Boolean functions arising from mathematical objects called simplicial complexes. Inspired by Adamaszek (Am Math Mon 122:367–370, 2015), we introduce n-variable generating functions associated with simplicial complexes and derive explicit formulae. Applying the construction of binary linear codes from Boolean functions (Carlet in Finite Fields Appl 13:121–135, 2007; Wadayama in Des Codes Cryptogr 23:23–33, 2001) to Boolean functions arising from simplicial complexes, we obtain a class of optimal linear codes and a class of minimal linear codes.
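The generic flavor of such constructions can be illustrated by the standard defining-set construction (our sketch; whether the paper uses exactly this map is an assumption): each message x is sent to the vector of inner products with the elements of a defining set D, here taken to be the faces of a small simplicial complex written as indicator vectors.

```python
from itertools import product

def code_from_defining_set(D, n):
    """Defining-set construction: for D a list of vectors in F_2^n,
    each message x in F_2^n maps to the codeword (x . d mod 2)_{d in D}."""
    code = set()
    for x in product([0, 1], repeat=n):
        cw = tuple(sum(xi * di for xi, di in zip(x, d)) % 2 for d in D)
        code.add(cw)
    return code

# Nonempty faces of the complex generated by {1, 2} inside {1, 2, 3},
# as indicator vectors (a hypothetical small example)
D = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]
C = code_from_defining_set(D, 3)
```

For this tiny D the construction yields a [3, 2, 2] binary code: four codewords, one of weight 0 and three of weight 2. The weight distribution of such codes is exactly what the generating functions of the paper are designed to compute in closed form.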
Machine learning (ML) is ever more frequently used as a tool to aid decision-making. The need to understand the decisions made by ML algorithms has sparked a renewed interest in explainable ML models. A number of known models are regarded as interpretable by human decision-makers, with varying degrees of difficulty. The size of such models plays a crucial role in determining how easily they can be understood by a human. In this paper we propose the use of Binary Decision Diagrams (BDDs) as an interpretable ML model. BDDs can be deemed as interpretable as decision trees (DTs) while offering an often more compact representation due to node sharing. Fixed variable ordering also allows for more concise explanations. We propose a SAT-based approach for learning optimal BDDs that exhibit perfect accuracy on training data. We also explore heuristic methods for computing sub-optimal BDDs, in order to improve scalability.
DNA strand displacement reactions (SDRs) provide a set of intelligent toolboxes for developing molecular computation. Whereas SDR-based logic gate circuits have achieved a high level of complexity, scaling up to practically achievable computational tasks remains a hurdle. Switching circuits, originally proposed by Shannon in 1938 and nowadays widely used in telecommunication, represent an alternative and efficient means to realize fast and high-bandwidth communication. Here we develop SDR-based DNA switching circuits (DSCs) for implementing digital computing. Using a routing strategy on a programmable DNA switch canvas, we show that arbitrary Boolean functions can be represented by DSCs and implemented with molecular switches at high computing speed. We further demonstrate the implementation of full-adder and square-rooting functions using DSCs, which use as few as one quarter of the DNA strands compared with a dual-rail logic expression-based design. We expect that DSCs provide a design paradigm for digital computation with biomolecules.
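For reference, the Boolean function pair that the demonstrated full-adder circuits realize per bit is the standard one (this sketch is only the target logic, not the molecular implementation):

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum is the XOR of the three inputs,
    carry-out is the majority of the three inputs."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout
```

Any routing of a switching circuit that reproduces this truth table, whatever the substrate, implements binary addition.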
As a key component in many computer vision systems, optical flow estimation, especially with large displacements, remains an open problem. In this paper we present a simple but powerful matching method that works in a coarse-to-fine scheme for optical flow estimation. Inspired by nearest neighbor field (NNF) algorithms, our approach, called CPM (Coarse-to-fine PatchMatch), blends an efficient random search strategy with the coarse-to-fine scheme for the optical flow problem. Unlike existing NNF techniques, which are efficient but whose results are often too noisy for optical flow due to the lack of global regularization, we propose a propagation step with a constrained random search radius between adjacent levels of the hierarchical architecture. The resulting correspondences enjoy a built-in smoothing effect, which is better suited for optical flow estimation than NNF techniques. Furthermore, our approach can also capture tiny structures with large motions, which are a problem for traditional coarse-to-fine optical flow algorithms. Interpolated by an edge-preserving interpolation method (EpicFlow), our method outperforms the state of the art on MPI-Sintel and KITTI, and runs much faster than the competing methods.
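The random search step inherited from PatchMatch can be sketched in one dimension (a minimal illustration of the idea, not the CPM implementation): candidate offsets are sampled in windows whose radius shrinks geometrically, keeping the best match found; CPM additionally caps this radius between adjacent pyramid levels.

```python
import random

def random_search(cost, init_offset, max_radius):
    """PatchMatch-style random search: sample one candidate offset per
    round in a window whose radius halves each round, keeping the
    candidate with the lowest matching cost."""
    best = init_offset
    best_cost = cost(init_offset)
    r = max_radius
    while r >= 1:
        candidate = best + random.randint(-r, r)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
        r //= 2
    return best
```

Because a candidate is only accepted when it improves the cost, the returned offset is never worse than the initial guess; constraining max_radius at finer levels is what gives CPM its built-in smoothing.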
Let <inline-formula> <tex-math notation="LaTeX">T_{\epsilon } </tex-math></inline-formula> be the noise operator acting on Boolean functions <inline-formula> <tex-math notation="LaTeX">f:\{0, 1\}^{n}\to \{0, 1\} </tex-math></inline-formula>, where <inline-formula> <tex-math notation="LaTeX">\epsilon \in (0, 1/2) </tex-math></inline-formula> is the noise parameter. Given <inline-formula> <tex-math notation="LaTeX">\alpha >1 </tex-math></inline-formula> and a fixed mean <inline-formula> <tex-math notation="LaTeX">\mathbb {E} f </tex-math></inline-formula>, which Boolean function <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula> has the largest <inline-formula> <tex-math notation="LaTeX">\alpha </tex-math></inline-formula>-th moment <inline-formula> <tex-math notation="LaTeX">\mathbb {E}(T_\epsilon f)^\alpha </tex-math></inline-formula>? This question has close connections with the noise stability of Boolean functions, the problem of non-interactive correlation distillation, and the Courtade-Kumar conjecture on the most informative Boolean function. In this paper, we characterize the maximizers in some extremal settings, such as low noise (<inline-formula> <tex-math notation="LaTeX">\epsilon =\epsilon (n) </tex-math></inline-formula> close to 0), high noise (<inline-formula> <tex-math notation="LaTeX">\epsilon =\epsilon (n) </tex-math></inline-formula> close to 1/2), as well as when <inline-formula> <tex-math notation="LaTeX">\alpha =\alpha (n) </tex-math></inline-formula> is large. Analogous results are also established in more general contexts, such as Boolean functions defined on the discrete torus <inline-formula> <tex-math notation="LaTeX">(\mathbb {Z}/p \mathbb {Z})^{n} </tex-math></inline-formula> and the problem of noise stability in a tree model.
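The quantities involved can be computed exactly for small n, which helps build intuition for the extremal questions. A brute-force sketch (our illustration): T_eps f(x) is the expectation of f over inputs obtained from x by flipping each bit independently with probability eps, and the alpha-th moment averages (T_eps f)^alpha over the uniform cube.

```python
from itertools import product

def noise_operator(f, n, eps):
    """Return T_eps f, computed by brute force over {0,1}^n:
    T_eps f(x) = E[f(y)], where y flips each bit of x with prob eps."""
    def Tf(x):
        total = 0.0
        for y in product([0, 1], repeat=n):
            d = sum(xi != yi for xi, yi in zip(x, y))  # Hamming distance
            total += (eps ** d) * ((1 - eps) ** (n - d)) * f(y)
        return total
    return Tf

def alpha_moment(f, n, eps, alpha):
    """E[(T_eps f)^alpha] under the uniform measure on {0,1}^n."""
    Tf = noise_operator(f, n, eps)
    return sum(Tf(x) ** alpha for x in product([0, 1], repeat=n)) / 2 ** n
```

For the dictator f(x) = x_1 with n = 2 and eps = 1/4, T_eps f takes the values 3/4 (when x_1 = 1) and 1/4 (when x_1 = 0), so the second moment is (2 * (3/4)^2 + 2 * (1/4)^2) / 4 = 0.3125.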
Silicon-based static random access memories (SRAM) and digital Boolean logic have been the workhorses of state-of-the-art computing platforms. Despite tremendous strides in scaling the ubiquitous metal-oxide-semiconductor transistor, the underlying von Neumann computing architecture has remained unchanged. The limited throughput and energy efficiency of state-of-the-art computing systems result, to a large extent, from the well-known von Neumann bottleneck. The energy and throughput inefficiency of von Neumann machines has been accentuated in recent times by the present emphasis on data-intensive applications such as artificial intelligence, machine learning, and cryptography. A possible approach towards mitigating the overhead associated with the von Neumann bottleneck is to enable in-memory Boolean computations. In this paper, we present an augmented version of the conventional SRAM bit-cells, called the X-SRAM, with the ability to perform in-memory, vector Boolean computations in addition to the usual memory storage operations. We propose at least six different schemes for enabling in-memory vector computations, including NAND, NOR, IMP (implication), and XOR logic gates, with respect to different bit-cell topologies: the 8T cell and the 8+T differential cell. In addition, we present a novel 'read-compute-store' scheme, wherein the computed Boolean function can be directly stored in the memory without the need to latch the data and carry out a subsequent write operation. The feasibility of the proposed schemes has been verified using predictive transistor models and detailed Monte Carlo variation analysis. As an illustration, we also present the efficacy of the proposed in-memory computations by implementing the advanced encryption standard algorithm on a non-standard von Neumann machine wherein the conventional SRAM is replaced by X-SRAM. Our simulations indicate that up to 75% of memory accesses can be saved using the proposed techniques.
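Functionally, the 'read-compute-store' scheme amounts to reading two words, applying a bitwise Boolean function, and writing the result back in one operation. A software model of that behavior (our sketch; the actual mechanism is circuit-level, not code):

```python
def read_compute_store(mem, row_a, row_b, dest, op):
    """Model of read-compute-store: read rows a and b, compute a
    bitwise Boolean function, and store the result at dest without a
    separate latch-and-write step."""
    ops = {
        "nand": lambda a, b: [1 - (x & y) for x, y in zip(a, b)],
        "nor":  lambda a, b: [1 - (x | y) for x, y in zip(a, b)],
        "xor":  lambda a, b: [x ^ y for x, y in zip(a, b)],
        "imp":  lambda a, b: [(1 - x) | y for x, y in zip(a, b)],  # a -> b
    }
    mem[dest] = ops[op](mem[row_a], mem[row_b])
    return mem[dest]

# Two stored words; the NAND of rows 0 and 1 lands directly in row 2.
mem = {0: [1, 1, 0, 0], 1: [1, 0, 1, 0]}
read_compute_store(mem, 0, 1, 2, "nand")
```

Each in-memory operation of this kind replaces a read-modify-write round trip through the processor, which is the source of the reported savings in memory accesses.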