Goal or intent recognition, where one agent recognizes the goals or intentions of another, can be a powerful tool for effective teamwork and improving interaction between agents. Such reasoning can be challenging to perform, however, because observations of an agent can be unreliable and, often, an agent does not have access to the reasoning processes and mental models of the other agent. Despite this difficulty, recent work has made great strides in addressing these challenges. In particular, two Artificial Intelligence (AI)-based approaches to goal recognition have recently been shown to perform well: goal recognition as planning, which reduces a goal recognition problem to the problem of plan generation; and Combinatory Categorial Grammars (CCGs), which treat goal recognition as a parsing problem. Additionally, new advances in cognitive science with respect to Theory of Mind reasoning have yielded an approach to goal recognition that leverages analogy in its decision making. However, there is still much unknown about the potential and limitations of these approaches, especially with respect to one another. Here, we present an extension of the analogical approach to a novel algorithm, Refinement via Analogy for Goal Reasoning (RAGeR). We compare RAGeR to two state-of-the-art approaches which use planning and CCGs for goal recognition, respectively, along two different axes: observations and the other agent's mental model. Overall, we show that no approach dominates across all cases and discuss the relative strengths and weaknesses of these approaches. Scientists interested in goal recognition problems can use this knowledge as a guide to select the correct starting point for their specific domains and tasks.
μCCG, a CCG-based Game-Playing Agent for μRTS
Kantharaju, Pavan; Ontanon, Santiago; Geib, Christopher W.
2018 IEEE Conference on Computational Intelligence and Games (CIG),
2018-Aug.
Conference Proceeding
This paper presents a Combinatory Categorial Grammar-based game playing agent called μCCG for the Real-Time Strategy testbed μRTS. The key problem that μCCG tries to address is that of adversarial planning in the very large search space of RTS games. In order to address this problem, we present a new hierarchical adversarial planning algorithm based on Combinatory Categorial Grammars (CCGs). The grammar used by our planner is automatically learned from sequences of actions taken from game replay data. We provide an empirical analysis of our agent against agents from the CIG 2017 μRTS competition using competition rules. μCCG represents the first complete agent to use a learned formal grammar representation of plans to adversarially plan in RTS games.
Towards a DRS Parsing Framework for French
Le, Luyen Ngoc; Haralambous, Yannis; Lenca, Philippe
2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS),
2019-Oct.
Conference Proceeding
Open access
Combinatory Categorial Grammars provide a transparent interface between surface syntax and underlying semantic representation. Discourse Representation Theory allows the handling of meaning across sentence boundaries. Based on the foundations of these two theories along with the work of Johan Bos on the Boxer framework for the English language, we propose an approach to the task of semantic parsing with Discourse Representation Structure for the French language. By giving an example of discourse analysis on French sentences and experimenting on 4,525 sentences taken from the French Treebank corpus, we demonstrate and evaluate the outcomes of our framework.
In real-time sentence comprehension, the comprehender is often required to establish syntactic dependencies between words that are linearly distant. Major models of sentence comprehension assume that longer dependencies are more difficult to process because of working memory limitations. While the expected effect of distance on reading times (locality effect) has been robustly observed in certain constructions, such as relative clauses in English, its generalizability to a wider range of constructions has been empirically questioned. The current study proposes a new metric of syntactic distance that capitalizes on the flexible constituency of Combinatory Categorial Grammar (CCG), and argues that it offers a unified account of the locality effects. It is shown that this metric correctly predicts both the presence of the locality effect in English relative clauses and its absence in verb-final languages, without assuming language- or dependency-specific differences in the sensitivity to the locality effect. It is further shown that the CCG-based distance is a significant predictor of the self-paced reading times from an English corpus, even when other known predictors such as dependency-based locality and surprisal are taken into account. These results suggest that human sentence comprehension involves rapid integration of input words into efficiently compressed syntactic representations, and CCG is a plausible theory of the grammar that subserves this process.
Sequence labeling is the most widely used method for the CCG supertagging task, in which a supertag (lexical category) is assigned to each word in an input sentence. The major challenge in CCG supertagging is the large number of lexical categories. To address this, machine learning and deep learning methods have been used and have achieved promising results. However, these models either rely on many hand-crafted features (in the case of machine learning methods) or process a sequence using sentence-level representations without modeling the correlations between neighboring labels, which strongly influence the prediction of the current label (in the case of deep learning models). More recently, machine learning and deep learning models have been combined. In this paper, we use a combination of Conditional Random Field and Bidirectional Long Short-Term Memory models. First, the model learns a sentence representation that draws on both past and future input features, thanks to the Bidirectional Long Short-Term Memory architecture. The model then incorporates sentence-level tag information through the Conditional Random Field model. We evaluate the combined Bidirectional Long Short-Term Memory and Conditional Random Field (BLSTM-CRF) model on in-domain and out-of-domain datasets, and in both cases achieve results at or close to the state of the art on the CCG supertagging task.
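The CRF half of such a model contributes learned tag-transition scores that are combined with the BiLSTM's per-position emission scores at decoding time. As a minimal sketch of that decoding step only (pure Python, with hand-picked toy scores standing in for learned BiLSTM and CRF parameters — not the paper's actual model), Viterbi decoding over emissions plus transitions looks like:

```python
def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence.

    emissions[t][k]: score of tag k at position t (in BLSTM-CRF, the BiLSTM output);
    transitions[i][j]: score of moving from tag i to tag j (the CRF parameters).
    """
    n_tags = len(emissions[0])
    score = list(emissions[0])   # best score of any path ending in each tag so far
    back = []                    # backpointers, one list per position after the first
    for em in emissions[1:]:
        new_score, ptrs = [], []
        for j in range(n_tags):
            cand = [score[i] + transitions[i][j] + em[j] for i in range(n_tags)]
            best_i = max(range(n_tags), key=lambda i: cand[i])
            ptrs.append(best_i)
            new_score.append(cand[best_i])
        back.append(ptrs)
        score = new_score
    best = max(range(n_tags), key=lambda k: score[k])
    path = [best]
    for ptrs in reversed(back):          # follow backpointers to recover the path
        path.append(ptrs[path[-1]])
    return path[::-1]

# Toy example with two tags: strong transition scores can override
# a per-position (emission-only) choice.
emissions = [[2.0, 0.0], [0.0, 0.0], [0.0, 3.0]]
transitions = [[0.0, -5.0], [-5.0, 0.0]]   # switching tags is heavily penalized
print(viterbi_decode(emissions, transitions))  # [1, 1, 1]
```

A greedy per-position argmax would pick tag 0 at the first position, but the transition penalty makes the globally best path stay on tag 1 throughout — exactly the neighborhood-label information the abstract argues plain sequence models miss.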
The generative capacity of combinatory categorial grammars (CCGs) as generators of tree languages is investigated. It is demonstrated that the tree languages generated by CCGs can also be generated by simple monadic context-free tree grammars. However, the important subclass of pure combinatory categorial grammars cannot even generate all regular tree languages. Additionally, the tree languages generated by combinatory categorial grammars with limited rule degrees are characterized: If only application rules are allowed, then these grammars can generate only a proper subset of the regular tree languages, whereas they can generate exactly the regular tree languages once first-degree composition rules are permitted.
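The result above turns on the distinction between application rules and first-degree composition rules. As a rough illustration of what these two rule classes do (not of the tree-grammar construction in the paper), here is a minimal sketch; the category encoding is our own:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Functor:
    """A slashed CCG category, e.g. S/NP or S\\NP."""
    result: "Category"
    slash: str  # '/' seeks its argument to the right, '\\' to the left
    arg: "Category"

    def __str__(self):
        return f"({self.result}{self.slash}{self.arg})"

Category = Union[str, Functor]  # atomic categories are plain strings

def forward_apply(x: Category, y: Category):
    """Forward application (>):  X/Y  Y  =>  X"""
    if isinstance(x, Functor) and x.slash == '/' and x.arg == y:
        return x.result
    return None

def forward_compose(x: Category, y: Category):
    """First-degree forward composition (>B):  X/Y  Y/Z  =>  X/Z"""
    if (isinstance(x, Functor) and x.slash == '/'
            and isinstance(y, Functor) and y.slash == '/'
            and x.arg == y.result):
        return Functor(x.result, '/', y.arg)
    return None

# A transitive verb (S\NP)/NP consumes an object NP by application...
tv = Functor(Functor('S', '\\', 'NP'), '/', 'NP')
print(forward_apply(tv, 'NP'))  # (S\NP)
# ...while >B combines two forward functors into one: X/Y composed with Y/Z.
print(forward_compose(Functor('X', '/', 'Y'), Functor('Y', '/', 'Z')))  # (X/Z)
```

Application alone only cancels an argument that is already present, whereas composition builds a new functor category, which is what gives the composition-augmented grammars the extra generative power described in the characterization above.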
Combinatory Categorial Grammar (CCG) is an extension of categorial grammar that is well-established in computational linguistics. It is mildly context-sensitive, so it is efficiently parsable and reaches an expressiveness that is suitable for describing natural languages. Weighted CCGs (wCCGs) are introduced as a natural extension of CCG with weights taken from an arbitrary commutative semiring. Their expressive power is compared to other weighted formalisms with special emphasis on the weighted forests generated by wCCGs, since the ability to express the underlying syntactic structure of an input sentence is a vital feature of CCG in the area of natural language processing. Building on recent results for the expressivity in the unweighted setting, the corresponding results are derived for the weighted setting for any commutative semiring. More precisely, the weighted forests generatable by wCCGs are also generatable by weighted simple monadic context-free tree grammars (wsCFTGs). If the rule system is restricted to application rules and composition rules of first degree, then the generatable weighted forests are exactly the regular weighted forests. Finally, when only application rules are allowed, then a proper subset of the regular weighted forests is generatable.
This article proposes that the possible word orders for any natural language construction composed of n elements, each of which selects for the category headed by the next, are universally limited both across and within languages to a subclass of permutations on the 'universal order of command' 1, … , n, as determined by their selectional restrictions. The permitted subclass is known as the 'separable' permutations, and grows in n as the large Schröder series {1, 2, 6, 22, 90, 394, 1806, … }. This universal is identified as formal because it follows directly from the assumptions of combinatory categorial grammar (CCG), in particular from the fact that all CCG syntactic rules are subject to a combinatory projection principle that limits them to binary rules applying to contiguous nonempty categories. The article presents quantitative empirical evidence in support of this claim from the linguistically attested orders of the four elements Dem(onstrative), Num(erator), A(djective), N(oun), which have been examined in connection with various versions of Greenberg's putative 20th universal concerning their order. A universal restriction to separable permutations is also supported by word-order variation in the Germanic verb cluster and in the Hungarian verb complex, among other constructions.
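The growth claim is easy to check computationally: separable permutations are exactly those avoiding the patterns 2413 and 3142 (a standard characterization, not stated in the abstract), so a brute-force count for small n should reproduce the large Schröder series. A sketch:

```python
from itertools import combinations, permutations

def is_separable(perm):
    """A permutation is separable iff it avoids the patterns 2413 and 3142."""
    for p1, p2, p3, p4 in combinations(range(len(perm)), 4):
        a, b, c, d = perm[p1], perm[p2], perm[p3], perm[p4]
        # relative orders matching 2413 and 3142, respectively
        if c < a < d < b or b < d < a < c:
            return False
    return True

def count_separable(n):
    """Number of separable permutations of n elements (brute force)."""
    return sum(is_separable(p) for p in permutations(range(n)))

print([count_separable(n) for n in range(1, 7)])  # [1, 2, 6, 22, 90, 394]
```

At n = 4 the only excluded orders are 2413 and 3142 themselves (24 − 2 = 22), matching the fourth term of the series.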
This paper presents a computational framework for Natural Language Inference (NLI) using logic-based semantic representations and theorem-proving. We focus on logical inferences with comparatives and other related constructions in English, which are known for their structural complexity and difficulty in performing efficient reasoning. Using the so-called A-not-A analysis of comparatives, we implement a fully automated system to map various comparative constructions to semantic representations in typed first-order logic via Combinatory Categorial Grammar parsers and to prove entailment relations via a theorem prover. We evaluate the system on a variety of NLI benchmarks that contain challenging inferences, in comparison with other recent logic-based systems and neural NLI models.
This article proposes a syntax and a semantics for intonation in English and some related languages. The semantics is 'surface-compositional', in the sense that syntactic derivation constructs information-structural logical form monotonically, without rules of structural revision, and without autonomous rules of 'focus projection'. This is made possible by the generalized notion of syntactic constituency afforded by combinatory categorial grammar (CCG)—in particular, the fact that its rules are restricted to string-adjacent type-driven combination. In this way, the grammar unites intonation structure and information structure with surface-syntactic derivational structure and Montague-style compositional semantics, even when they deviate radically from traditional surface structure. The article revises and extends earlier CCG-based accounts of intonational semantics, grounding hitherto informal notions like 'theme' and 'rheme' (a.k.a. 'topic' and 'comment', 'presupposition' and 'focus', etc.) and 'background' and 'contrast' (a.k.a. 'given' and 'new', 'focus', etc.) in a logic of speaker/hearer supposition and update, using a version of Rooth's alternative semantics. A CCG grammar fragment is defined that constrains language-specific intonation and its interpretation more narrowly than previous attempts.