A partial learner in the limit [25], given a representation of the target language (a text), outputs a sequence of conjectures, where one correct conjecture appears infinitely many times and other conjectures each appear a finite number of times. Following [7] and [21], we define intrinsic complexity of partial learning, based on reducibilities between learning problems. Although the whole class of recursively enumerable languages is partially learnable (see [25]) and thus belongs to the complete learnability degree, we discovered a rich structure of incomplete degrees, reflecting different types of learning strategies (based, to some extent, on topological structures of the target language classes). We also exhibit examples of complete classes that illuminate the character of the strategies for partial learning of the hardest classes.
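To make the partial-learning criterion concrete, here is a minimal Python sketch (not from the paper; the toy class of finite languages and the None-as-pause encoding are assumptions). The learner conjectures the set of data seen so far; on a text for a finite language L, the conjecture L recurs infinitely often once all of L has appeared, while every other conjecture appears only finitely often, which is exactly the criterion.

```python
# A minimal sketch (assumptions noted above): a partial learner for the
# class of all finite languages.  Its conjecture is the set of data seen
# so far, used as a canonical index for that finite language.

def partial_learner(text):
    """Yield one conjecture per datum of the input text (an iterable)."""
    seen = set()
    for datum in text:
        if datum is not None:     # None encodes a "pause" in the text
            seen.add(datum)
        yield frozenset(seen)     # conjecture: canonical index for `seen`

# Example: a text for the finite language {1, 2, 3}.
text = [1, 2, None, 3, 2, 1, 3]   # imagine this tail repeating forever
for hypothesis in partial_learner(text):
    print(sorted(hypothesis))
```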
We introduce and explore a model for parallel learning of families of languages computable by finite automata. In this model, an algorithmic or automatic learner takes n different input languages and identifies at least m of them correctly. For finite parallel learning of sufficiently large families, we establish a full characterization of learnability in terms of characteristic samples of languages. Based on this characterization, we show that it is the difference n−m, the number of languages which may remain unidentified, that is crucial. Similar results are obtained for parallel learning in the limit. We also consider parallel finite learnability by finite automata and obtain some partial results. A number of problems for the automatic variant of parallel learning remain open.
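As an illustration of the role characteristic samples play here, the sketch below shows the single-language case (a toy antichain family; the names and samples are assumptions, not the paper's construction): a finite learner stays silent until the data cover the characteristic sample of exactly one consistent language, then commits once and for all. The parallel model runs n such texts and tolerates up to n−m failures.

```python
# A minimal sketch (toy family; not the paper's construction): finite
# learning from characteristic samples.  A characteristic sample for L is
# a finite subset of L contained in no other language of the family; once
# the input covers such a sample and is consistent with a unique language,
# the learner commits and never revises.

FAMILY  = {"L1": {0, 1}, "L2": {0, 2}, "L3": {1, 2}}   # an antichain
SAMPLES = {"L1": {0, 1}, "L2": {0, 2}, "L3": {1, 2}}   # here: the sets themselves

def finite_learner(text):
    seen = set()
    for datum in text:
        seen.add(datum)
        fits = [name for name in FAMILY
                if SAMPLES[name] <= seen and seen <= FAMILY[name]]
        if len(fits) == 1:
            return fits[0]        # a single, never-revised conjecture
    return None                   # not enough data in this finite text

print(finite_learner([1, 2]))     # -> "L3"
```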
Within the frameworks of learning in the limit of indexed classes of recursive languages from positive data and automatic learning in the limit of indexed classes of regular languages (with automatically computable sets of indices), we study the problem of minimizing FM(n), the maximum number of mind changes made by a learner M on all languages with indices not exceeding n. For inductive inference of recursive languages, we establish two conditions under which FM(n) can be made smaller than any recursive unbounded non-decreasing function. We also establish how FM(n) is affected if at least one of these two conditions does not hold. In the case of automatic learning, some partial results addressing speeding up the function FM(n) are obtained.
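For a concrete reading of FM(n), the sketch below counts mind changes of a simple learner on the toy indexed class L_i = {0, ..., i} and tabulates the worst case over indices up to n (the class, the learner, and the restriction to canonical texts are all assumptions for illustration; the formal definition quantifies over all texts).

```python
# A toy sketch (assumed class and texts): the profile
# F_M(n) = max mind changes of M over all languages with index <= n.

def mind_changes(learner, text):
    changes, previous = 0, None
    for conjecture in learner(text):
        if previous is not None and conjecture != previous:
            changes += 1
        previous = conjecture
    return changes

def max_learner(text):
    """Conjecture index max(seen) for the class L_i = {0, ..., i}."""
    top = None
    for datum in text:
        top = datum if top is None else max(top, datum)
        yield top

def F(n, text_for=lambda i: range(i + 1)):   # canonical text 0, 1, ..., i
    return max(mind_changes(max_learner, text_for(i)) for i in range(n + 1))

print([F(n) for n in range(5)])              # -> [0, 1, 2, 3, 4]: F_M(n) = n
```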
We introduce and study a model for learning in the limit by finite automata from positive data and negative counterexamples. The focus is on learning classes of languages with the membership problem computable by finite automata (so-called automatic classes). We show that, within the framework of our model, finite automata (automatic learners) can learn all automatic classes when the memory of a learner is restricted by the size of the longest datum seen so far. We also study the capabilities of automatic learners in our model under other restrictions on the memory, and how the choice of negative counterexamples (arbitrary, least, or bounded by the largest positive datum seen so far) can impact automatic learnability.
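The memory restriction in the positive result can be pictured as follows (a sketch over an assumed toy automatic family L_n = {w : |w| ≤ n}; a real automatic learner is a finite transducer, which Python only imitates here): the learner's entire long-term state is one string, never longer than the longest datum seen so far.

```python
# A minimal sketch (assumed toy family L_n = { w : len(w) <= n }): the
# learner remembers only the longest datum, so its state trivially
# respects the memory bound, and it conjectures the index n = len(memory).

def bounded_memory_learner(text):
    memory = ""                      # invariant: |memory| <= longest datum
    for datum in text:
        if len(datum) > len(memory):
            memory = datum           # remember (only) the longest datum
        yield len(memory)            # conjecture: L_n with n = len(memory)

for n in bounded_memory_learner(["a", "bbb", "cc"]):
    print("conjecture: L_%d" % n)    # prints L_1, L_3, L_3
```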
A variant of iterative learning in the limit (cf. Lange and Zeugmann 1996) is studied where a learner gets negative examples refuting conjectures containing data in excess of the target language and uses additional information of the following four types: (a) memorizing up to n input elements seen so far; (b) up to n feedback membership queries (testing if an item is a member of the input seen so far); (c) the number of input elements seen so far; (d) the maximal element of the input seen so far. We explore how the additional information available to such learners (defined and studied in Jain and Kinber 2007) may help. In particular, we show that adding the maximal element or the number of elements seen so far helps such learners to infer any indexed class of languages class-preservingly (using a descriptive numbering defining the class); as proved in Jain and Kinber (2007), this is not possible without additional information. We also study how, in the given context, different types of additional information fare against each other, and establish hierarchies of learners memorizing n+1 versus n input elements seen and making n+1 versus n feedback membership queries.
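To illustrate how the negative examples of this model drive revisions, here is a minimal sketch (a toy class of co-singleton languages; the whole setup is an assumption, not a construction from the paper). The learner's entire state is its current conjecture, so it fits the iterative regime, and a single counterexample settles the hypothesis.

```python
# A minimal sketch (toy class {N} ∪ {N \ {k} : k in N}): the learner starts
# with the overgeneral conjecture N; if the teacher refutes it with the
# counterexample k (an element of the conjecture outside the target), it
# switches to N \ {k} and never changes its mind again.

def teacher(conjectured_excluded, target_excluded):
    """Return a counterexample if the conjecture overgeneralizes, else None."""
    if target_excluded is not None and conjectured_excluded != target_excluded:
        return target_excluded        # k is in the conjecture but not in L
    return None

def run(target_excluded, stream):
    excluded = None                   # None encodes the conjecture "N"
    for datum in stream:              # positive data; unused by this learner
        cex = teacher(excluded, target_excluded)
        if cex is not None:
            excluded = cex            # one mind change, then stability
    return excluded

print(run(target_excluded=7, stream=range(5)))   # -> 7, conjecture N \ {7}
```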
A learning algorithm is developed for a class of regular expressions equivalent to the class of all unionless unambiguous regular expressions of loop depth 2. The learner uses one representative example of the target language (where every occurrence of every loop in the target expression is unfolded at least twice) and a number of membership queries. The algorithm works in time polynomial in the length of the input example.
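The flavor of the algorithm's use of queries can be sketched as follows (an illustrative heuristic only, not the paper's algorithm; the target expression a(bc)+d and the oracle built from Python's re module are assumptions): adjacent repeats in the representative example are loop candidates, and pumping them down and up with membership queries filters out coincidental repeats.

```python
import re

def candidate_loops(example):
    """All (start, body) with an adjacent repeat body·body at start."""
    n = len(example)
    for i in range(n):
        for length in range(1, (n - i) // 2 + 1):
            if example[i:i+length] == example[i+length:i+2*length]:
                yield i, example[i:i+length]

def confirmed_loops(example, member):
    """Keep candidates that survive pumping, per the membership oracle."""
    out = []
    for i, body in candidate_loops(example):
        tail = example[i + 2*len(body):]
        once   = example[:i] + body     + tail    # pump down to body^1
        thrice = example[:i] + body * 3 + tail    # pump up to body^3
        if member(once) and member(thrice):
            out.append((i, body))
    return out

# Toy oracle for an assumed target expression a(bc)+d:
member = lambda s: re.fullmatch(r"a(bc)+d", s) is not None
print(confirmed_loops("abcbcd", member))          # -> [(1, 'bc')]
```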
In this paper we introduce a paradigm for learning in the limit of potentially infinite languages from all positive data and negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered that reflect different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider models where (1) a learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; (3) a counterexample can be delayed. The learning power and limitations of these models, relationships between them, and their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as those that do not get them for an indefinitely (but finitely) long time (or are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample). Another result shows that for behaviorally correct learning (where semantic convergence is required of the learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the “ultimate power”: it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.
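A toy round of the protocol, with variant (1)'s least counterexamples, is sketched below (finite sets stand in for r.e. languages purely so subset checks are decidable in the demo; the learner's retraction rule is an assumption for illustration).

```python
# A minimal sketch of the protocol: the learner conjectures from positive
# data; whenever its conjecture is not a subset of the target, the teacher
# returns the least negative counterexample (variant (1) above).

def least_counterexample(hypothesis, target):
    bad = sorted(hypothesis - target)
    return bad[0] if bad else None

def learn(target, text, initial_guess):
    hypothesis, forbidden = set(initial_guess), set()
    for datum in text:
        hypothesis.add(datum)              # cover the positive data
        cex = least_counterexample(hypothesis, target)
        if cex is not None:
            forbidden.add(cex)
            hypothesis -= forbidden        # retract refuted elements
        print(sorted(hypothesis), "counterexample:", cex)
    return hypothesis

learn(target={1, 2, 3}, text=[1, 2, 3], initial_guess={1, 5})
```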
A number of natural models for learning in the limit are introduced to deal with the situation when a learner is required to provide a grammar covering the input even if only a part of the target language is available. Examples of language families are exhibited that are learnable in one model and not learnable in another. Some characterizations of learnability of algorithmically enumerable families of languages for the models in question are obtained. Since learnability from any part of the target language does not imply monotonicity of the learning process, we also consider our models under the additional monotonicity constraint.
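The two requirements combined in these models can be stated as simple predicates on a learner's run (a sketch under the assumption that monotonicity means conjectures non-decreasing under inclusion, as in strong-monotonic learning; the finite-set encoding is for illustration only).

```python
# A toy sketch: check that every conjecture covers the input seen so far,
# and that the conjecture sequence is monotonic under set inclusion.

def covering(conjectures, data_prefixes):
    return all(set(d) <= h for h, d in zip(conjectures, data_prefixes))

def monotonic(conjectures):
    return all(a <= b for a, b in zip(conjectures, conjectures[1:]))

hyps     = [{0}, {0, 1}, {0, 1, 2}]
prefixes = [[0], [0, 1], [0, 1]]
print(covering(hyps, prefixes), monotonic(hyps))   # -> True True
```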