We investigate encodings for modular arithmetic in the lambda-calculus. There are two approaches: adapting well-known numeral systems, and building a new one. This paper focuses on providing original techniques to encode modular arithmetic directly. We present a modular arithmetic numeral system complete with multiplication and an implementation of the Chinese remainder theorem, all without recursion, i.e., without using fixed-point operators.
•Modular arithmetic has a simple implementation in the lambda-calculus.
•The Chinese remainder theorem reduces computational complexity.
•The Chinese remainder theorem has a simple implementation using pairs.
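The pair-based Chinese remainder theorem the highlights mention can be illustrated outside the lambda-calculus. The following Python sketch (an illustration of the standard CRT construction, not the paper's lambda-term encoding) represents a number modulo m·n as a pair of residues, so that multiplication becomes componentwise:

```python
def egcd(a, b):
    # Extended Euclid: returns (g, s, t) with s*a + t*b == g.
    if b == 0:
        return (a, 1, 0)
    g, s, t = egcd(b, a % b)
    return (g, t, s - (a // b) * t)

def to_crt(x, m, n):
    # Encode x (mod m*n) as its pair of residues.
    return (x % m, x % n)

def crt_mul(p, q, m, n):
    # Multiplication on residue pairs is componentwise.
    return ((p[0] * q[0]) % m, (p[1] * q[1]) % n)

def from_crt(pair, m, n):
    # Reconstruct x mod m*n from its residue pair (m, n coprime):
    # with s*m + t*n == 1, x = a*t*n + b*s*m satisfies both congruences.
    a, b = pair
    _, s, t = egcd(m, n)
    return (a * t * n + b * s * m) % (m * n)
```

Working on the pairs keeps every intermediate value small, which is the source of the complexity reduction claimed above.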
We investigate an unsuspected connection between logical connectives with non-harmonious deduction rules, such as Prior's tonk, and quantum computing. We argue that these connectives model the information-erasure, the non-reversibility, and the non-determinism that occur, among other places, in quantum measurement. We introduce an intuitionistic propositional logic with a non-harmonious logical connective sup and two interstitial rules, and show that the proof language of this logic forms the core of a quantum programming language.
•Introducing a propositional logic with non-harmonious rules and the ⊙ (read “sup”) connective.
•The proof language of this logic forms the core of a quantum programming language.
•Non-harmonious rules of ⊙ model quantum properties of measurement: information-erasure, non-reversibility, non-determinism.
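The three measurement properties listed above can be pictured with a toy term representation. This Python sketch is purely illustrative and hypothetical, not the paper's calculus: eliminating a sup keeps one branch and discards the other, which is exactly information-erasing, irreversible, and non-deterministic.

```python
import random

def sup(t, u):
    # Build a ⊙ ("sup") term holding both alternatives.
    return ("sup", t, u)

def measure(term):
    # Measurement-like elimination: one branch survives at random,
    # the other is erased; the original term cannot be recovered.
    assert term[0] == "sup"
    return random.choice([term[1], term[2]])
```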
•We connect two different fields: differential program semantics and incremental computing.
•Program distance is increasingly important for software analysis.
•Differential logical relations are a new technique that measures the distance between programs as a third program.
•Program derivatives are used as a way to achieve incremental computation in higher-order languages.
•Program derivatives are canonical self-distances, and differential logical relations can be used in incremental computing.
We study the deep relationship between differential logical relations and incremental computing by showing how self-differences in the former precisely correspond to derivatives in the latter. The byproduct of this relationship is twofold: on the one hand, we show how differential logical relations can be seen as a powerful meta-theoretical tool in the analysis of incremental computations, enabling an easy proof of soundness of differentiation. On the other hand, we generalize differential logical relations so as to be able to interpret full recursion, something not possible in the original system.
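The derivative/self-difference idea behind incremental computing can be sketched concretely. In this hypothetical Python example (not from the paper), a function `total` comes with a derivative `d_total` satisfying the incremental equation: applying the function to a changed input equals patching the old output with the derivative of the change.

```python
def total(xs):
    # Base function: sum of a list.
    return sum(xs)

def d_total(xs, dxs):
    # Derivative of `total`: given the old input and a change
    # (elements appended), return the change in the output.
    return sum(dxs)

xs = [1, 2, 3]
dxs = [10, 20]
# Incremental update: patch the old result instead of recomputing.
new_total = total(xs) + d_total(xs, dxs)
assert new_total == total(xs + dxs)
```

Soundness of differentiation is exactly the assertion in the last line: the patched output agrees with recomputation from scratch.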
The provability problem in intuitionistic propositional second-order logic with existential quantifier and implication $(\exists,\to)$ is proved to be undecidable in the presence of free type variables (constants). This contrasts with the result that intuitionistic propositional second-order logic with existential quantifier, conjunction and negation is decidable.
•PLFA is an online textbook, available at plfa.inf.ed.ac.uk.
•The proofs for a textbook on programming languages can be written in Agda, without a separate scripting language.
•Constructive proofs of preservation and progress give rise to a prototype evaluator.
•Using raw terms with a separate typing relation is far less perspicuous than using inherently-typed terms.
•An online final examination with access to a proof assistant can lead to flawless student performance.
One of the leading textbooks for formal methods is Software Foundations (SF), written by Benjamin Pierce in collaboration with others, and based on Coq. After five years using SF in the classroom, we came to the conclusion that Coq is not the best vehicle for this purpose, as too much of the course needs to focus on learning tactics for proof derivation, at the cost of learning programming language theory. Accordingly, we have written a new textbook, Programming Language Foundations in Agda (PLFA). PLFA covers much of the same ground as SF, although it is not a slavish imitation.
What did we learn from writing PLFA? First, that it is possible. One might expect that without proof tactics the proofs would become too long, but in fact proofs in PLFA are about the same length as those in SF. Proofs in Coq require an interactive environment to be understood, while proofs in Agda can be read on the page. Second, that constructive proofs of preservation and progress give immediate rise to a prototype evaluator. This fact is obvious in retrospect, but it is not exploited in SF (which instead provides a separate normalise tactic), nor can we find it in the literature. Third, that using extrinsically-typed terms is far less perspicuous than using intrinsically-typed terms. SF uses the former presentation, while PLFA presents both; the former uses about 1.6 times as many lines of Agda code as the latter, roughly the golden ratio.
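The second observation, that progress plus preservation yields an evaluator, can be sketched in a few lines. This hypothetical Python example (PLFA's development is in Agda, with machine-checked proofs rather than runtime checks) uses a tiny language of booleans and conditionals: progress says every closed well-typed non-value can take a step, and preservation guarantees the step stays well-typed, so iterating the step function is an evaluator.

```python
# Terms of a tiny language: booleans and conditionals.
TRUE, FALSE = ("true",), ("false",)

def if_(c, t, e):
    return ("if", c, t, e)

def is_value(t):
    return t in (TRUE, FALSE)

def step(t):
    # Progress, constructively: every well-typed non-value has a next step.
    if t[0] == "if":
        _, c, th, el = t
        if c == TRUE:
            return th
        if c == FALSE:
            return el
        return ("if", step(c), th, el)
    raise ValueError("stuck term (would be ruled out by typing)")

def evaluate(t):
    # Iterate progress; preservation guarantees each step stays typed,
    # so `step` never raises on well-typed input.
    while not is_value(t):
        t = step(t)
    return t
```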
The textbook is written as a literate Agda script, and can be found here: http://plfa.inf.ed.ac.uk
Life is confronted with computation problems in a variety of domains including animal behavior, single-cell behavior, and embryonic development. Yet we currently do not know of a naturally existing biological system that is capable of universal computation, i.e., Turing-equivalent in scope. Generic finite-dimensional dynamical systems (which encompass most models of neural networks, intracellular signaling cascades, and gene regulatory networks) fall short of universal computation, but are assumed to be capable of explaining cognition and development. I present a class of models that bridge two concepts from distant fields: combinatory logic (or, equivalently, lambda calculus) and RNA molecular biology. A set of basic RNA editing rules can make it possible to compute any computable function with identical algorithmic complexity to that of Turing machines. The models do not assume extraordinarily complex molecular machinery or any processes that radically differ from what we already know to occur in cells. Distinct independent enzymes can mediate each of the rules, and RNA molecules solve the problem of parenthesis matching through their secondary structure. In the most plausible of these models, all of the editing rules can be implemented with merely cleavage and ligation operations at fixed positions relative to predefined motifs. This demonstrates that universal computation is well within the reach of molecular biology. It is therefore reasonable to assume that life has evolved – or possibly began with – a universal computer that yet remains to be discovered. The variety of seemingly unrelated computational problems across many scales can potentially be solved using the same RNA-based computation system. Experimental validation of this theory may immensely impact our understanding of memory, cognition, development, disease, evolution, and the early stages of life.
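The combinatory logic that the abstract invokes is remarkably small: two rewrite rules suffice for universal computation. The following Python sketch (a standard SK-combinator reducer, for orientation only; the paper's contribution is mapping such rules onto RNA editing) represents terms as nested application pairs and rewrites the leftmost redex:

```python
def reduce_once(t):
    # One leftmost reduction step; returns (new_term, changed?).
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if isinstance(f, tuple) and f[0] == "K":
        return f[1], True                     # (K a) b  ->  a
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
        a, b, c = f[0][1], f[1], x            # ((S a) b) c  ->  (a c) (b c)
        return ((a, c), (b, c)), True
    nf, changed = reduce_once(f)
    if changed:
        return (nf, x), True
    nx, changed = reduce_once(x)
    return (f, nx), changed

def normalize(t, fuel=1000):
    # Rewrite until no redex remains (bounded, since reduction may diverge).
    for _ in range(fuel):
        t, changed = reduce_once(t)
        if not changed:
            return t
    raise RuntimeError("no normal form within fuel")
```

The nested tuples play the role that secondary structure plays for RNA: they record which subterms the parentheses group together.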
Generalized numberings are an extension of Ershov's notion of numbering, based on partial combinatory algebra (pca) instead of the natural numbers. We study various algebraic properties of generalized numberings, relating properties of the numbering to properties of the pca. As in the lambda calculus, extensionality is a key notion here.
Adding Negation to Lambda Mu
van Bakel, Steffen
Logical Methods in Computer Science, 05/2023, Volume 19, Issue 2
Journal article, peer reviewed, open access.
We present $\cal L$, an extension of Parigot's $\lambda\mu$-calculus obtained by adding negation as a type constructor, together with syntactic constructs that represent negation introduction and elimination. We define a notion of reduction that extends $\lambda\mu$'s reduction system with two new reduction rules, and show that the system satisfies subject reduction. Using Aczel's generalisation of Tait and Martin-L\"of's notion of parallel reduction, we show that this extended reduction is confluent. Although the notion of type assignment has its limitations with respect to the representation of proofs in natural deduction with implication and negation, we show that all propositions that can be proved there have a witness in $\cal L$. Using Girard's approach of reducibility candidates, we show that all typeable terms are strongly normalisable, and conclude the paper by showing that type assignment for $\cal L$ enjoys the principal typing property.