Gold introduced the notion of learning in the limit, where a class S is learnable iff there is a recursive machine M which reads the course of values of a function f and converges to a program for f whenever f is in S. An important measure of the speed of convergence in this model is the number of mind changes before the onset of convergence. The oldest model considers a constant bound on the number of mind changes M makes on any input function; such a bound is referred to here as type 1. This was later generalized to a bound of type 2, where a counter ranges over constructive ordinals and is counted down at every mind change. Although ordinal bounds permit the inference of richer concept classes than constant bounds, they are still a severe restriction. The present work therefore introduces two more general approaches to bounding mind changes, based on counting down in a linearly ordered set (type 3) and counting down in a partially ordered set (type 4). In both cases the set must not contain infinite descending recursive sequences. These four types of mind change bounds yield a hierarchy, and there are identifiable classes that cannot be learned even with the most general bound of type 4. It is shown that the existence of a type 2 bound is equivalent to the existence of a learning algorithm which converges on every (even nonrecursive) input function, and the existence of a type 4 bound is equivalent to the existence of a learning algorithm which converges on every recursive function. A partial characterization of type 3 yields a result of independent interest in recursion theory. The interplay between mind change complexity and the choice of hypothesis space is also investigated. It is established that for certain concept classes, a more expressive hypothesis space can sometimes reduce the mind change complexity of learning these classes. The notion of a mind change bound for behaviourally correct learning is addressed indirectly by employing the above four types to restrict the number of predictive errors of commission in finite-error next-value learning (NV′′), a model equivalent to behaviourally correct learning. Again, natural characterizations for type 2 and type 4 bounds are derived. Their naturalness is further illustrated by characterizing them in terms of branches of uniformly recursive families of binary trees.
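The type 1 (constant) bound can be made concrete with a toy sketch, which is our own illustration and not a construction from the paper: a learner for polynomials of degree at most d conjectures the minimal-degree interpolant of the values seen so far. Since a refuted hypothesis forces the interpolant's degree strictly upward, a counter started at d bounds the mind changes.

```python
from fractions import Fraction

def newton_coeffs(xs, ys):
    # Divided-difference coefficients of the unique minimal-degree
    # polynomial interpolating the points (xs[i], ys[i]).
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(coeffs, xs, x):
    # Horner-style evaluation of the Newton-form polynomial at x.
    acc = Fraction(0)
    for i in reversed(range(len(coeffs))):
        acc = acc * (x - xs[i]) + coeffs[i]
    return acc

def learn(f, d, steps):
    # Learn f in the limit, assuming f is a polynomial of degree <= d.
    # Every mind change strictly increases the interpolant's degree,
    # so the counter, started at d, never drops below zero (type 1 bound).
    xs, ys, hyp, counter = [], [], None, d
    for n in range(steps):
        xs.append(Fraction(n))
        ys.append(Fraction(f(n)))
        if hyp is not None:
            coeffs, support = hyp
            if newton_eval(coeffs, support, xs[-1]) == ys[-1]:
                continue          # hypothesis survives: no mind change
            counter -= 1          # mind change: count the counter down
        hyp = (newton_coeffs(xs, ys), list(xs))
    return hyp, counter
```

On f(n) = n² + 1 with d = 3, the learner changes its mind twice (constant, then linear, then quadratic) and the counter ends at 1.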
In this paper we consider learnability in some special numberings, such as Friedberg numberings, which contain all the recursively enumerable languages but have a simpler grammar equivalence problem than acceptable numberings. We show that every explanatorily learnable class can be learnt in some Friedberg numbering. However, this result holds neither for behaviourally correct learning nor for finite learning. One can also show that some Friedberg numberings are so restrictive that all classes which can be explanatorily learnt in such Friedberg numberings have only finitely many infinite languages. We also study similar questions for several properties of learners, such as consistency, conservativeness, prudence, iterativeness and non-U-shaped learning. Besides Friedberg numberings, we also consider the above problems for programming systems with a K-recursive grammar equivalence problem.
We report a case of disseminated neuroblastoma (NB) causing epidural spinal cord compression in a 67-year-old woman. Because NB is primarily a tumor of infancy and childhood, less is known about its clinical course and optimal treatment in adults. This patient was treated with a thoracic laminectomy and tumor resection; polychemotherapy with one cycle of vindesine, cisplatin, and etoposide; one cycle of vincristine, dacarbazine, ifosfamide, and doxorubicin; and radiotherapy to the spine. She remained able to walk but died 8.5 months later of diffuse systemic tumor progression.
Inductive inference considers two types of queries: queries to a teacher about the function to be learned and queries to a non-recursive oracle. This paper combines these two types: it considers the three basic models of queries to a teacher (QEXSucc, QEX< and QEX+) together with membership queries to some oracle.
The results for each of these three models of query-inference are the same: If an oracle is omniscient for query-inference then it is already omniscient for EX. There is an oracle of trivial EX-degree, which allows nontrivial query-inference. Furthermore, queries to a teacher cannot overcome differences between oracles and the query-inference degrees are a proper refinement of the EX-degrees.
In the case of finite learning, the query-inference degrees coincide with the Turing degrees. Furthermore, oracles cannot close the gap between the different types of queries to a teacher.
Concept drift means that the concept about which data is obtained may shift from time to time, each time after some minimum permanence. Apart from this minimum permanence, the concept shifts need not satisfy any further requirements and may occur infinitely often. This work studies to what extent it is still possible to predict or learn values for a data sequence produced by drifting concepts. Various ways to measure the quality of such predictions, including martingale betting strategies and the density and frequency of correctness, are introduced and compared with one another.
For each of these measures of prediction quality, for some interesting concrete classes, (nearly) optimal bounds on permanence for attaining learnability are established. The concrete classes, from which the drifting concepts are selected, include regular languages accepted by finite automata of bounded size, polynomials of bounded degree, and sequences defined by recurrence relations of bounded size. Some important, restricted cases of drifts are also studied, for example, the case where the intervals of permanence are computable. In the case where the concepts shift only among finitely many possibilities from certain infinite, arguably practical classes, the learning algorithms can be considerably improved.
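A simple way to picture learning under drift when the concepts shift among finitely many candidates is a longest-consistent-suffix predictor: after a shift, the newly active concept agrees with the entire data suffix since the shift, so following whichever candidate matches the longest recent stretch re-locks onto the active concept. This sketch, including the candidate pool, is our own illustration and not a construction from the paper.

```python
def longest_consistent_suffix(g, observed, t):
    # Length of the longest suffix of observed[0..t-1] that candidate g matches.
    k = 0
    for s in range(t - 1, -1, -1):
        if g(s) != observed[s]:
            break
        k += 1
    return k

def predict(candidates, observed, t):
    # Predict the value at time t by following the candidate consistent
    # with the longest recent stretch of data (ties go to the first one).
    best = max(candidates, key=lambda g: longest_consistent_suffix(g, observed, t))
    return best(t)

# A sequence drifting once between two constant concepts, permanence 10.
truth = [0] * 10 + [1] * 10
cands = [lambda n: 0, lambda n: 1, lambda n: n % 2]
errors = sum(predict(cands, truth[:t], t) != truth[t] for t in range(20))
```

On this sequence the predictor errs exactly once, at the moment of the shift.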
Primary Spinal Marginal Zone Lymphoma. Ahmadi, Sebastian A; Frank, Stephan; Hänggi, Daniel ... Neurosurgery, 08/2012, Volume 71, Issue 2. Peer-reviewed journal article.
Abstract
BACKGROUND AND IMPORTANCE:
Marginal zone lymphoma (MZL) describes a heterogeneous group of indolent B-cell lymphomas. The World Health Organization recognizes 3 types of MZLs: splenic MZL, ...nodal MZL, and extranodal MZL of mucosa-associated lymphoid tissue. There is no consensus on the optimal adjuvant treatment modalities for intracranial primary MZLs. To date, no case of spinal primary MZL has been reported.
CLINICAL PRESENTATION:
We present the first case of spinal MZL diagnosed in a 65-year-old man with progressive paraparesis. He underwent surgical removal of the main spinal tumor mass, which extended epidurally from vertebral body T3 to T7. Surgery was followed by 10 sessions of local irradiation for a total dose of 31 Gy. On long-term follow-up in 2010, the patient was in good health without any signs of residual or recurrent disease. Twenty-seven publications reporting on 61 cases of intracranial primary MZL were identified and reviewed. In the majority of cases of marginal zone B-cell lymphoma, adjuvant radiotherapy was used, with some combining radiotherapy and chemotherapy after surgical removal of the bulk of the main tumor. Long-term follow-up in most patients showed no evidence of disease and clinical well-being years after the initial diagnosis.
CONCLUSION:
Chemotherapy and/or radiation have been used in larger case series. Although there is no defined treatment guideline for this rare disease entity, our review of the literature suggests a favorable prognosis when combining surgical and adjuvant radiotherapy approaches.
A one-sided classifier for a given class of languages converges to 1 on every language from the class and outputs 0 infinitely often on languages outside the class. A two-sided classifier, on the other hand, converges to 1 on languages from the class and converges to 0 on languages outside the class. The present paper investigates one-sided and two-sided classification for classes of recursive languages. Theorems are presented that help assess the classifiability of natural classes. The relationships of classification to inductive learning theory and to structural complexity theory in terms of Turing degrees are studied. Furthermore, the special case of classification from only positive data is also investigated.
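The one-sided/two-sided distinction can be made concrete with an illustrative example of our own (not one from the paper): for the class of cofinite subsets of the natural numbers, read as characteristic sequences, simply guessing the most recent bit is a one-sided classifier. On a cofinite set the bits are eventually all 1, so the guesses converge to 1; on a co-infinite set the guess 0 recurs forever, which is all one-sidedness requires. The same strategy does not converge to 0 outside the class, so it is not two-sided.

```python
def classify_prefix(prefix):
    # One-sided classifier for the cofinite sets: after reading the
    # characteristic bits x_0 .. x_n, guess x_n.  Guesses converge to 1
    # iff the sequence is eventually all 1s (a cofinite set); otherwise
    # the guess 0 is emitted infinitely often.
    return prefix[-1]

# Cofinite set N \ {3}: the guesses stabilize at 1.
cofinite = [0 if n == 3 else 1 for n in range(20)]
stable = [classify_prefix(cofinite[:n + 1]) for n in range(20)]

# The even numbers (co-infinite): the guess 0 keeps recurring.
evens = [1 if n % 2 == 0 else 0 for n in range(20)]
recurring = [classify_prefix(evens[:n + 1]) for n in range(20)]
```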
A \(\Pi^{0}_{1}\) class \(P\) is thin if every \(\Pi^{0}_{1}\) subclass \(Q\) of \(P\) is the intersection of \(P\) with some clopen set. In 1993, Cenzer, Downey, Jockusch and Shore initiated the study of Turing degrees of members of thin \(\Pi^{0}_{1}\) classes, and proved that a degree containing no members of thin \(\Pi^{0}_{1}\) classes can be recursively enumerable, and can be a minimal degree below \(\mathbf{0}'\). In this paper, we study this topic in terms of genericity, and prove that all 2-generic degrees contain no members of thin \(\Pi^{0}_{1}\) classes. In contrast, we show that all 1-generic degrees below \(\mathbf{0}'\) contain members of thin \(\Pi^{0}_{1}\) classes.