Qualitative modeling allows autonomous agents to learn comprehensible control models, formulated in a way that is close to human intuition. By abstracting away certain numerical information, qualitative models can provide better insight into the operating principles of a dynamic system than traditional numerical models. We show that qualitative models, learned from numerical traces, contain enough information to allow motion planning and path following. We demonstrate our methods on the task of flying a quadcopter. A qualitative control model is learned through motor babbling. Training is significantly faster than the training times reported for reinforcement-learning approaches to similar quadcopter experiments. A qualitative collision-free trajectory is computed by means of qualitative simulation and executed reactively, dynamically adapting to the numerical characteristics of the system. Experiments were conducted and assessed in the V-REP robotic simulator.
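One step of qualitative simulation over ordered landmark values can be sketched in a toy form (this is our own simplified formalization for illustration; the paper's qualitative model of the quadcopter is learned from data and far richer):

```python
# Toy qualitative-simulation step: a variable's magnitude is one of a few
# ordered landmarks, and its qualitative direction of change ("dec"/"std"/"inc")
# determines which landmark it can move to next.
QUAL = {"dec": -1, "std": 0, "inc": +1}

def next_magnitude(magnitude, direction):
    """Successor landmark of a qualitative variable after one step."""
    landmarks = ["neg", "zero", "pos"]  # ordered qualitative magnitudes
    i = landmarks.index(magnitude)
    # Move one landmark in the direction of change, clamped at the extremes.
    j = max(0, min(len(landmarks) - 1, i + QUAL[direction]))
    return landmarks[j]

print(next_magnitude("zero", "inc"))  # → pos
print(next_magnitude("pos", "std"))   # → pos
```

Chaining such steps over all variables of a model enumerates the possible qualitative behaviors, which is the basis for planning a qualitative trajectory.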
In December 2017, the game-playing program AlphaZero was reported to have learned, in less than 24 hours, to play each of the games of chess, Go and shogi better than any human, and better than any other existing specialised computer program for these games. This was achieved purely through self-play, without access to any knowledge of these games other than the rules. In this paper we consider some limitations of this spectacular success. The program was trained in well-defined and relatively small domains (admittedly with enormous combinatorial complexity) compared to many real-world problems, and it was possible to generate large amounts of learning data through simulated games, which is typically not possible in real-life domains. When it comes to understanding the games played by AlphaZero, the program's inability to explain its games and the knowledge it acquired in human-understandable terms is a serious limitation.
Data visualization plays a crucial role in identifying interesting patterns in exploratory data analysis. Its use is, however, made difficult by the large number of possible data projections showing different attribute subsets that must be evaluated by the data analyst. In this paper, we introduce a method called VizRank, which is applied to classified data to automatically select the most useful data projections. VizRank can be used with any visualization method that maps attribute values to points in a two-dimensional visualization space. It assesses possible data projections and ranks them by their ability to visually discriminate between classes. The quality of class separation is estimated by computing the predictive accuracy of a k-nearest neighbor classifier on the data set consisting of the x and y positions of the projected data points and their class information. The paper introduces the method and presents experimental results showing that VizRank's ranking of projections agrees closely with subjective rankings by data analysts. The practical use of VizRank is also demonstrated by an application in the field of functional genomics.
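The projection-scoring idea can be sketched as leave-one-out k-NN accuracy on the 2-D projected points (a minimal illustration of the scoring principle; the exact neighbor weighting and evaluation protocol in VizRank may differ):

```python
# Score a 2-D projection by how well classes separate: leave-one-out
# k-nearest-neighbor accuracy on the projected (x, y) points.
import math
from collections import Counter

def knn_projection_score(points, labels, k=3):
    """Fraction of points whose class matches the majority class
    of their k nearest neighbors in the projection."""
    correct = 0
    for i, (x, y) in enumerate(points):
        # Distances from point i to every other projected point.
        dists = sorted(
            (math.hypot(x - px, y - py), labels[j])
            for j, (px, py) in enumerate(points) if j != i
        )
        # Majority vote among the k nearest neighbors.
        votes = Counter(lbl for _, lbl in dists[:k])
        if votes.most_common(1)[0][0] == labels[i]:
            correct += 1
    return correct / len(points)

# Two well-separated clusters score 1.0; overlapping classes score lower.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_projection_score(points, labels, k=3))  # → 1.0
```

Ranking all candidate projections by this score surfaces the views an analyst should inspect first.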
Teachers' readiness for technology integration also depends on their beliefs about the contribution of technology to teaching and learning, which influence their motivation to adopt it. Initial pre-service teacher education is critical in reducing the attitude-behaviour divide that supports technology acceptability, acceptance and use. Acceptance of interaction between human and robot is more complicated than acceptance of human-computer interaction. Social robots are radical innovations, harder for potential users to accept in human social spaces than incremental innovations are. In 2019, a survey using a convenience sample of 121 first-year students was conducted to examine pre-service teachers' beliefs about social robot educational technology. It examined the following factors, derived from the Unified Theory of Acceptance and Use of Technology adapted for social robots in education: perceived social dimension, intention to use, perceived usability, and anxiety. Based on our findings, there appears to be a critical disjunction between researchers' efforts to equip social robots with human manners and social intelligence and participants' rejection of this technology precisely because it mimics being human. Further, we report that ICT familiarity, as assessed using PISA's Information Communication Technology (ICT) familiarity factors, is related to robot acceptability. These findings need further examination to inform educational robotics design, Human-Robot Interaction research, and teacher education and training.
Practitioner notes
What is already known about this topic
In the age of robotic technology, teachers face requirements to prepare students for work and life with social robots.
Social robots are tested for classroom integration.
Teachers' readiness to implement robot lessons depends on their beliefs about social robotic technology's contribution to teaching.
Research and development in the field of social robotics still tend to focus more frequently on technology applications rather than pedagogical issues and advancing teaching and learning.
What this paper adds
Participants refuse to accept the idea of social robot‐based instruction.
The identified belief pattern is based mainly on the perceived social dimension, intention to use, perceived usability and anxiety.
Participants critically perceive the robot's social dimension.
Some of PISA's ICT familiarity factors are related to robot acceptability factors.
Implications for policy and/or practice
Policy and practice need to address how social robots could be integrated into current teaching and learning practices and, more importantly, how robotic technology could facilitate innovative pedagogical models for effective and efficient learning.
The introduction of social robots should follow instructional design requirements and not merely technological advancement.
Initial teacher education has to provide social robotic learning environments for pre-service teachers to experiment with and design.
•Chess players do not have better cognitive abilities than other people.
•A multiple-knowledge paradigm is used to define procedural chunks and STM capacity.
•Efficiency of information processing depends on the level of expertise.
•The combination of the size and number of procedural chunks recalled defines STM capacity.
•Procedural chunks retained in STM depend on skill level and the sort order of information.
When it comes to cognitive architecture and human information processing, chunks are one of the best known and most recognized constructs. Nevertheless, the nature of chunks is still very elusive, especially when it comes to chunks in procedural knowledge. This study deals with basic features of procedural information processing and examines the manifestation of chunks in procedural knowledge. The participants' task was to reconstruct sequences of chess moves. Chess was chosen as the experimental domain because of its complexity, well-defined rules and standardized measure of chess player strength. From the results we conclude that short-term memory capacity is determined by the combination of the size and number of procedural chunks recalled into short-term memory. We have shown that, on average, participants with more specialized knowledge operated faster and with larger chunks of procedural information than participants with less specialized knowledge. We have also shown that in procedural information processing, the level of expertise and the sorting order of the retrieved information are important factors influencing the number of procedural chunks retained in short-term memory. Therefore, the capacity of short-term memory in complex situations cannot be expressed as a simple concept.
Our research aims to examine the effectiveness of introducing social robots as educational technology within authentic classroom activities, without modifying those activities to be designed for a robot. We chose as our test subject the fifth-grade curricular topic "The role of technology and its impact on society", which meets the critical stage of moral development for students aged 11–12. The study, with both an experimental group (EG) and a control group (CG), will be conducted over 6 weeks. It will examine the impact of robot-supported lessons, with post-participation testing, on learning outcomes, and examine students' perception of the robot in the classroom as a potential correlate of academic performance. The study will take the form of a between-group non-randomised controlled experiment. The control and experimental groups will be matched with respect to gender, mastery of technology, and previous knowledge and understanding of the curricular topic in focus. The instructional design of process-outcome strategies will incorporate all of Bloom's taxonomic levels. In our review of related studies, we identified a gap: between-group experiments on social robot-supported lessons within the regular curriculum. Because related research has focused more on robot performance in the classroom from technical-interaction aspects, we proceed from a pedagogical starting point. The robot's placement in the pedagogical process will be considered an integral part of the teacher's technical environment. We will use the pre-participation test to establish whether there is initial equivalence between EG and CG in terms of gender, mastery of technology, and previous knowledge and understanding of the curricular topic under examination.
Data-driven intelligent tutoring systems learn to provide feedback based on past student behavior, reducing the effort required for their development. A major obstacle to applying data-driven methods in the programming domain is the lack of meaningful observable actions for describing students' problem-solving processes. We propose rewrite rules as a language-independent formalization of programming actions in terms of code edits. We describe a method for automatically extracting rewrite rules from students' program-writing traces, and a method for debugging new programs using these rules. We used these methods to automatically provide hints in a web application for learning programming. In-class evaluation showed that students receiving automatic feedback solved problems faster and submitted fewer incorrect programs. We believe that rewrite rules provide a good basis for further research into how humans write and debug programs.
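The core idea of a rewrite rule as a code edit can be sketched as a (before, after) pair applied to a buggy program (a deliberately minimal, hypothetical rule format; the paper's rules are extracted automatically from student traces and operate on more structured edits than plain substrings):

```python
# Each rewrite rule is a (before, after) code-edit pair. Debugging a new
# program means finding a rule whose "before" part matches and applying it.
def apply_rules(program, rules):
    """Apply the first matching rewrite rule, modelling one debugging step."""
    for before, after in rules:
        if before in program:
            return program.replace(before, after, 1)
    return program  # no rule matches: leave the program unchanged

# Hypothetical rule capturing a common off-by-one error in student code.
rules = [("range(len(xs) - 1)", "range(len(xs))")]
buggy = "for i in range(len(xs) - 1): total += xs[i]"
print(apply_rules(buggy, rules))
# → for i in range(len(xs)): total += xs[i]
```

A hint can then be generated by showing the student the location of the matched "before" fragment rather than the full corrected program.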
•Abstract-syntax-tree (AST) patterns as attributes for classifying Prolog programs.
•Identification of AST patterns for detecting errors and programming approaches.
•An argument-based algorithm for learning rules suitable for tutoring.
•Evaluation of extracted patterns and rules on 42 Prolog exercises.
Students learn programming much faster when they receive feedback. However, in programming courses with high student-teacher ratios, it is practically impossible to provide feedback on all homework submitted by students. In this paper, we propose a data-driven tool for semi-automatic identification of typical approaches and errors in student solutions. With a list of frequent errors, a teacher can prepare common feedback for all students that explains the difficult concepts. We frame the problem as supervised rule learning, where each rule corresponds to a specific approach or error. We use correct and incorrect submitted programs as the learning examples, with patterns in abstract syntax trees serving as attributes. As the space of all possible patterns is immense, we needed the help of experts to select relevant patterns. To elicit knowledge from the experts, we used the argument-based machine learning (ABML) method, in which an expert and ABML interactively exchange arguments until the model is good enough. We provide a step-by-step demonstration of the ABML process, present examples of ABML questions and the corresponding expert answers, and interpret some of the induced rules. Evaluation on 42 Prolog exercises further shows the usefulness of the knowledge-elicitation process, as the models constructed using ABML achieve significantly better accuracy than models learned from human-defined patterns or from automatically extracted patterns.
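The idea of AST patterns as boolean attributes can be illustrated with the standard-library `ast` module (the paper mines Prolog ASTs with richer patterns; this Python sketch and its attribute names are our own illustration of the principle):

```python
# Each AST pattern becomes one boolean attribute per program; a rule learner
# can then combine such attributes into rules describing approaches or errors.
import ast

def has_pattern(source, node_type):
    """Attribute check: does the program's AST contain a node of this type?"""
    tree = ast.parse(source)
    return any(isinstance(n, node_type) for n in ast.walk(tree))

program = (
    "def total(xs):\n"
    "    s = 0\n"
    "    for x in xs:\n"
    "        s += x\n"
    "    return s"
)
attributes = {
    "uses_for_loop": has_pattern(program, ast.For),
    "uses_while_loop": has_pattern(program, ast.While),
    "contains_function_call": has_pattern(program, ast.Call),
}
print(attributes)
# → {'uses_for_loop': True, 'uses_while_loop': False, 'contains_function_call': False}
```

A learned rule might then read, informally, "programs with `uses_while_loop` but without a loop-variable update are incorrect", which is directly interpretable by a teacher.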
ILP turns 20. Muggleton, Stephen; De Raedt, Luc; Poole, David. Machine Learning, 2012, Vol. 86, No. 1. Journal article, peer-reviewed, open access.
Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography, this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel and challenging real-world applications. Lastly, by projection we suggest directions for research that will help the subject come of age.