Abstract Background Stealth assessment is a learning analytics method that leverages the collection and analysis of learners' interaction data to make real‐time inferences about their learning. Employed in digital learning environments, stealth assessment helps researchers and educators evaluate learners' competencies and customize the learning experience to their specific needs. This adaptability is closely intertwined with theories of learning, engagement, and motivation. The foundation of stealth assessment rests on evidence‐centered design (ECD), which consists of four core models: the Competency Model (CM), the Evidence Model, the Task Model, and the Assembly Model. Objective The first step in designing a stealth assessment is producing operational definitions of the constructs to be assessed. The CM establishes a framework of latent variables representing the target constructs, as well as their interrelations. When developing the CM, assessment designers must produce clear descriptions of the claims associated with the latent variables and their states, and sketch out how the competencies can be measured using assessment tasks. As the designers elaborate the assessment model, the CM definitions need to be revisited to make sure they align with the scope and constraints of the assessment. Although CM development is only the first step, problems at this stage may result in an assessment that does not meet its intended purpose. The objective of this paper is to elucidate the necessary steps of CM development and to highlight potential challenges in the process, along with strategies for addressing them, particularly for designers without much formal assessment experience. Method This paper is a methodological exposition showcasing five examples of CM development. Specifically, we conducted a qualitative retrospective analysis of the CM development procedure, wherein participants unfamiliar with ECD applied the framework and showcased their work. In a stealth assessment course, four groups of students (novice stealth assessment designers) developed stealth assessments for challenging‐to‐measure constructs across four distinct projects. During their CM development process, we observed various activities to pinpoint areas of difficulty. Results This paper presents five illustrative examples: one for assessing physics understanding and four for developing CMs for complex competencies, namely (1) systems thinking, (2) online information credibility evaluation, (3) computational thinking, and (4) collaborative creativity. Each example represents a case in CM development, offering valuable insights. Conclusion The paper concludes by discussing several guidelines derived from these examples. Dedicating ample time to fine‐tuning CMs can significantly enhance the accuracy of assessments of learners' knowledge and skills. The paper underscores the significance of the qualitative phases of crafting comprehensive stealth assessments, such as CM development, alongside the quantitative statistical modeling and technical aspects of these assessments.
Lay Description What is currently known about this topic? Stealth assessment is an unobtrusive, automated formative assessment method. It uses learning analytics within digital learning environments (e.g., games). Its main purpose is to assess and foster the competencies of diverse learners. What does this paper add? This paper serves as a conceptual and methodological guide. It focuses on the critical process of competency model development, a crucial step in the creation of stealth assessments. Implications for practice and/or policy Learning scientists and assessment designers can leverage this paper as a resource, drawing on its varied examples of the competency model development process.
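In the stealth assessment literature, the CM's latent competency variables and their links to observable task evidence are commonly operationalized as a Bayesian network, which matches the "quantitative statistical modeling" the abstract alludes to. Below is a minimal sketch using the pgmpy library, assuming a single binary competency with two binary task observables; the variable names and probability values are hypothetical illustrations, not taken from the paper:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Competency Model: one latent competency; Evidence Model: two observable task indicators.
model = BayesianNetwork([("Competency", "Task1Correct"), ("Competency", "Task2Correct")])

# Prior over the latent competency (0 = low, 1 = high) -- values are made up.
prior = TabularCPD("Competency", 2, [[0.5], [0.5]])
# P(task outcome | competency): high-competency learners succeed more often.
task1 = TabularCPD("Task1Correct", 2, [[0.8, 0.3], [0.2, 0.7]],
                   evidence=["Competency"], evidence_card=[2])
task2 = TabularCPD("Task2Correct", 2, [[0.7, 0.2], [0.3, 0.8]],
                   evidence=["Competency"], evidence_card=[2])

model.add_cpds(prior, task1, task2)
assert model.check_model()

# Real-time inference: update the competency estimate after observing gameplay evidence.
posterior = VariableElimination(model).query(
    ["Competency"], evidence={"Task1Correct": 1, "Task2Correct": 0})
print(posterior)
```

This kind of network is updated after every scored interaction, which is what allows the assessment to remain "stealthy" while still producing a running competency estimate.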
June 2019 saw the 25th anniversary of the World Conference on Special Needs Education, which was co‐organized by UNESCO and the Ministry of Education and Science of Spain and held in the city of Salamanca. It led to the Salamanca Statement and Framework for Action on Special Needs Education, arguably the most significant international document that has ever appeared in the field of special education. In so doing, it endorsed the idea of inclusive education, which was to become a major influence in subsequent years. The articles in this special issue illustrate the ways in which the Salamanca Statement has influenced, and still influences, the development of policies and practices across the world. In this editorial, we provide readers with some relevant background to these developments.
Abstract Background Children with low reading skills engage less frequently in reading activities, which in turn decreases the likelihood of improving their reading skills. Digital game‐based interventions have emerged as a promising tool for promoting reading development in children, particularly those with reading difficulties. As syllable‐based reading interventions are likely to increase word reading skills in low‐skilled readers, we developed a new reading intervention application that emphasizes syllable segmentation and integrates proven elements of digital game‐based learning. The intervention aimed to promote phonological recoding and to consolidate orthographic representations of syllables. Objectives The present study investigated the effects of the newly developed syllable‐based reading intervention application on general word recognition skills, phonological recoding processes, orthographic decoding processes, and text‐level reading comprehension skills in German second graders. Methods In a quasi‐experimental design, children with low word recognition skills were randomly assigned to a treatment group ( n = 66) or a wait‐list group ( n = 66). General word recognition skills, phonological recoding processes, orthographic decoding processes, and text‐level reading comprehension were measured with standardized German reading tests before and after the treatment group received the digital reading intervention for 20 sessions. Results Results indicated that the children in the treatment group showed significant improvement in general word recognition and in phonological recoding processes compared to equally low‐skilled untreated children in the wait‐list group. Orthographic decoding processes improved only in children with less severe impairments, whereas no significant improvements were found in text‐level reading comprehension. Takeaways The digital reading intervention is a promising approach for supporting word reading in second graders with low reading skills and can serve as an effective intervention tool for this target group.
As evidence becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. A total of 645 studies from 12 recent reviews of evaluations of preschool, reading, mathematics, and science programs were examined. Effect sizes were roughly twice as large for published articles, small‐scale trials, and experimenter‐made measures as for unpublished documents, large‐scale studies, and independent measures, respectively. Effect sizes were significantly higher in quasi‐experiments than in randomized experiments. Excluding tutoring studies, there were no significant differences in effect sizes between elementary and middle/high school studies. Regression analyses found that the effects of all of these factors were maintained after controlling for all other factors. Explanations for the effects of methodological features on effect sizes are discussed, as are implications for evidence‐based policy.
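The regression analyses described here amount to a meta‐regression of study effect sizes on coded methodological features, estimating each feature's contribution while holding the others constant. A minimal sketch of that kind of analysis with statsmodels follows; the handful of fabricated rows stands in for the 645 coded studies, and all variable names and values are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated stand-ins for coded studies: 1 = published / small-scale /
# experimenter-made measure / quasi-experiment, 0 otherwise.
studies = pd.DataFrame({
    "effect_size":  [0.45, 0.12, 0.50, 0.08, 0.30, 0.15, 0.60, 0.10],
    "published":    [1, 0, 1, 0, 1, 0, 1, 0],
    "small_scale":  [1, 0, 1, 0, 0, 1, 1, 0],
    "exp_measure":  [1, 0, 0, 1, 1, 0, 1, 0],
    "quasi":        [0, 0, 1, 0, 1, 0, 1, 0],
})

# Each feature's association with effect size, controlling for the others.
fit = smf.ols("effect_size ~ published + small_scale + exp_measure + quasi",
              data=studies).fit()
print(fit.params)
```

With the real data, the coefficients would quantify how much larger reported effects are for, say, published versus unpublished evaluations after adjusting for the other design features.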
Flipped classroom approaches remove the traditional transmissive lecture and replace it with active in-class tasks and pre-/post-class work. Despite the popularity of these approaches in the media, Google search, and casual hallway chats, there is very little evidence of effectiveness or consistency in understanding what a flipped classroom actually is. Although the flipped terminology is new, some of the approaches being labelled 'flipped' are actually much older. In this paper, we provide a catch-all definition for the flipped classroom, and attempt to retrofit it with a pedagogical rationale, which we articulate through six testable propositions. These propositions provide a potential agenda for research about flipped approaches and form the structure of our investigation. We construct a theoretical argument that flipped approaches might improve student motivation and help manage cognitive load. We conclude with a call for more specific types of research into the effectiveness of the flipped classroom approach.
Lay Description What is already known about this topic Historically, artificial intelligence (AI) education focused on theory and skills, but now there are AI competitions that encourage real‐world problem‐solving (AIdea. Competitions. 2023. https://aidea-web.tw/about?lang=zh). Competition‐based learning bridges the gap between academia and industry, fostering creativity and talent discovery (Abou‐Warda and Roberts. International Journal of Educational Management. 2016; 30(5): 698). Computer science education globally uses English as the primary language (Alhamami. Education and Information Technologies, 2021; 26: 6549–6562). Non‐English speaking nations are adopting English as the medium of instruction, impacting teaching effectiveness (Alhamami. Education and Information Technologies, 2021; 26: 6549–6562). What this paper adds This study combines online problem‐solving competitions with machine learning courses, using both Chinese and English instruction. Individual tutoring tailored to each team's competition topic provided real‐world problem‐solving experience and fostered school‐enterprise interactions. A rubric was created for evaluating domain knowledge, proposal writing, presentation skills, AI model accuracy, and competition outcomes by external experts, instructors, TAs, and peers. Implications for practice and/or policy Combining competition‐based learning with machine learning courses can boost students' domain knowledge, competition skills, and outcomes. This study confirms that using Chinese instruction in machine learning benefits non‐native English‐speaking students more than English instruction. Our teaching approach for information technology courses can be applied to develop students' relevant skills in this field.
Abstract Background Numerous higher education institutions worldwide have adopted English‐medium computer science courses and integrated online problem‐solving competitions to bridge gaps between theory and practice (Alhamami. Education and Information Technologies, 2021; 26: 6549–6562). Objectives This study aimed to investigate the factors influencing the use of online competitions in machine learning courses and their impact on student learning. We also analyse disparities in learning outcomes and the effects of instructional language (Chinese vs. English). Methods Among 123 participants at a university in northern Taiwan, 74 chose Chinese‐medium instruction (CMI) and 49 opted for English‐medium instruction (EMI). The course spanned 18 weeks: team formation in week 1; data analysis, machine learning, and deep learning in weeks 2–8; draft proposals and oral presentations by week 9; instructor guidance in weeks 9–17; followed by off‐campus competitions. In week 18, students presented projects for evaluation by judges. Results The results showed improved scores in competition proposal writing and oral presentations, especially for CMI students, who excelled in these areas and in creativity. CMI students emphasized domain knowledge, implementation completeness, and technical depth in their proposals. EMI students focused on implementation completeness and artificial intelligence model accuracy, along with creativity. Conclusion CMI students achieved superior outcomes in machine learning courses, particularly in competition proposals, oral presentations, and creativity. The choice of instructional language significantly influenced learning trajectories, leading to distinct knowledge‐development focuses for CMI and EMI students.
Abstract Background Developments in educational technology and learning analytics make it possible to automatically formulate and deploy personalized formative feedback to learners at scale. However, to be effective, the motivational and emotional impacts of such automated and personalized feedback need to be considered. The literature on feedback suggests that effective feedback, among other features, provides learners with a standard to compare their performance with, often called a reference frame. Past research has highlighted the emotional and motivational benefits of criterion‐referenced feedback (i.e., performance relative to a learning objective or mastery goal) compared to norm‐referenced feedback (performance relative to peers). Objectives Despite a substantial body of evidence regarding reference frame effects, important open questions remain. These encompass, for example, whether the benefits and drawbacks of norm‐referenced feedback apply in the same way to automated and personalized feedback messages and whether these effects apply to students uniformly. Further, the potential impacts of combining reference frames are largely unknown, even though combinations may be quite frequent in feedback practice. Finally, little research has been done on the effects of reference frames in computer‐supported collaborative learning, which differs from individual learning in meaningful ways. This study aims to contribute to addressing these open questions, thus providing insights into effective feedback design. Specifically, we aim to investigate usefulness perceptions as well as the emotional and motivational effects of different reference frames—and their combination—in automated and personalized formative feedback on a computer‐supported collaborative learning task. Methods A randomized field experiment with four feedback conditions (simple, norm‐referenced, criterion‐referenced, and combined feedback) was conducted in a course within a teacher training program ( N = 282). Collaborative groups worked on a learning task in the online learning environment, after which they received one of four possible automated and personalized formative feedback messages. We collected student data about feedback usefulness perceptions, motivational regulation, and achievement emotions to assess the differential effects of these feedback conditions. Results All feedback types were perceived as useful relative to the simple feedback condition. Norm‐referenced feedback showed detrimental effects on motivational regulation, whereas combined feedback led to more desirable motivational states. Further, criterion‐referenced feedback led to more positive emotions for overperformers and to more negative emotions for underperformers. The findings are discussed in light of the broader feedback literature, and recommendations for designing automated and personalized formative feedback messages for computer‐supported collaborative learning are presented.
Lay Description What is already known about this topic Automated and personalized feedback based on learning analytics can provide students with feedback at scale. Reference frames, a key design feature of any feedback, are essential to consider regarding their emotional and motivational impacts. What this paper adds Students deemed all automated and personalized feedback more useful than the simple feedback condition. The choice of reference frames matters for personalized and formative feedback in a computer‐supported collaborative learning task. The social comparison reference frame was largely detrimental, whereas a combination of reference frames can induce desirable motivational regulation. Criterion‐referenced feedback led to more positive emotions for overperformers and more negative emotions for underperformers. Implications for practice and/or policy Practitioners should carefully consider the reference frames that underlie their feedback designs. Social comparison should largely be avoided, unless combined with other, more informative feedback.
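To make the reference‐frame manipulation concrete, here is a minimal sketch of how an automated feedback generator might template messages under each of the four conditions. The function name, message wording, and threshold values are hypothetical and are not taken from the study's materials:

```python
def compose_feedback(score: float, mastery_goal: float, peer_mean: float,
                     frame: str) -> str:
    """Template a formative feedback message under a given reference frame."""
    base = f"Your group scored {score:.0f} points on the task."
    # Criterion frame: compare performance against a mastery goal.
    criterion = (f" The mastery goal was {mastery_goal:.0f} points, so you "
                 f"{'reached' if score >= mastery_goal else 'have not yet reached'} it.")
    # Norm frame: compare performance against peer groups.
    norm = (f" The average group scored {peer_mean:.0f} points, so you performed "
            f"{'above' if score >= peer_mean else 'below'} average.")
    if frame == "simple":
        return base
    if frame == "criterion":
        return base + criterion
    if frame == "norm":
        return base + norm
    if frame == "combined":
        return base + criterion + norm
    raise ValueError(f"Unknown reference frame: {frame}")

# One message per experimental condition for the same performance data.
for frame in ("simple", "norm", "criterion", "combined"):
    print(compose_feedback(score=72, mastery_goal=80, peer_mean=65, frame=frame))
```

The study's findings suggest that, of these templates, the norm‐only variant is the one to avoid on its own, while the combined variant supplies the social comparison alongside more informative criterion information.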
Abstract Background The voices used by virtual on‐screen characters have been shown to impact learning and perception outcomes. Recent replication research on these voices showed that synthetic voices were not a detriment if produced by a high‐quality engine with clear articulation. The current manuscript examines previous accent research that utilized now‐outdated engines, to determine whether the impact of accents still holds with high‐quality engines and voice actors. Objectives To investigate the impact on learning and perceptions of pedagogical agents speaking in accented voices, synthetic voices, and the interaction between the two, using modern voice engines. Methods This study used a between‐subjects two (accent) by two (voice type) factorial design to determine the impact that voice accent, voice type, and their interaction have on learning retention, learning transfer, mental effort efficiency, and perception measures. 197 participants were recruited from Amazon's Mechanical Turk with qualifications of being at least 18 years of age, having normal or corrected‐to‐normal hearing, and being located within the continental United States of America. Results and Conclusions There were no significant differences between the accented conditions and no interaction effects, deviating from previous research that showed an impact of accents on learning. The synthetic condition had significantly lower knowledge retention, knowledge transfer, mental effort efficiency, and perception measures than the human professional condition. These findings demonstrate the importance of considering voice quality when designing pedagogical agents. Previous research showed that synthetic voices perform as well as the average human voice, and this research continues the narrative of voice quality by showing that professional recordings outperform modern synthetic engines.
Lay Description What is currently known about this topic? Accents and synthetic voices can impact learning from virtual humans. What does this paper add? It shows that professional human voices outperform modern synthetic voices, a voice‐quality effect. Implications for practice and/or policy Creators of educational materials should aim to use professional voice actors.
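The 2 × 2 between‐subjects design described above is typically analysed with a two‐way ANOVA testing both main effects and their interaction. A minimal sketch using statsmodels follows; the fabricated retention scores and factor levels stand in for the 197 participants and are purely illustrative:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Fabricated retention scores for the four cells of the
# 2 (accent) x 2 (voice type) between-subjects design.
data = pd.DataFrame({
    "accent":    ["us", "us", "us", "us", "uk", "uk", "uk", "uk"] * 2,
    "voice":     ["human", "human", "synthetic", "synthetic"] * 4,
    "retention": [8, 9, 6, 5, 7, 9, 5, 6, 8, 7, 6, 4, 9, 8, 5, 5],
})

# Main effects of accent and voice type plus their interaction.
fit = smf.ols("retention ~ C(accent) * C(voice)", data=data).fit()
print(sm.stats.anova_lm(fit, typ=2))
```

In the study's pattern of results, the voice‐type main effect would be significant while the accent main effect and the interaction term would not be.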
Abstract Background Traditionally, understanding students' learning dynamics, collaboration, and emotions, and their impact on performance, has posed challenges for formative assessment. The complexity of monitoring and assessing these factors has often limited the depth and breadth of insights. Objectives This study aims to explore the potential of multimodal learning analytics as a formative assessment tool in math education. The focus is on discerning how collaborative discourse behaviours and emotional indicators interplay with lesson evaluation performance. Methods Using undergraduate students' multimodal data, which include collaboration data, facial behaviour data, and emotional data, the study explored patterns of collaboration and emotion. Through the lens of multimodal learning analytics, we conducted exploratory data analysis to identify meaningful relationships between specific types of collaborative discourse, facial expressions, and performance indicators. Moreover, the study evaluated a machine learning model's potential to predict target learning outcomes by integrating data from multiple channels. Results The analysis revealed key features from both discourse and emotion data as significant predictors. These findings underscore the potential of a multimodal analytical approach for understanding students' learning processes and predicting outcomes. Conclusions The study emphasizes the importance and feasibility of a multimodal learning analytics approach in the context of math education. It highlights the academic and practical implications of such an approach, along with its limitations, pointing towards future research directions in this area.
Lay Description What is currently known about this topic? Learning analytics has emerged as a powerful tool that aligns with the purpose of formative assessment, enabling educators to monitor and understand students' learning. The primary focus of traditional learning analytics research has been on online learning environments, relying mostly on unimodal data. This perspective offers a limited view, as learning is inherently multimodal. Formative assessment stands as a cornerstone of effective learning, with a rich body of evidence confirming its positive role in improving teaching and learning processes. What does this paper add? The study implemented a game‐based lesson critique activity with students and explored the use of Minecraft Education Edition as a tool for interactive math lessons. The study showcased the complex patterns of collaboration and interaction during game‐based learning activities, highlighting the integral role of teamwork. By analysing verbal and non‐verbal cues, the study illuminated various features of collaborative dynamics as students evaluated game‐based lesson activities with peers. Implications for practice and/or policy The study's findings can inform the design and implementation of multimodal learning analytics as a formative assessment tool. This can promote effective formative assessment and adaptive support mechanisms for students in mathematics education within a digital game‐based learning environment. By identifying areas of improvement and specific needs, practitioners can tailor interventions to address challenges faced by learners in their collaborative efforts and digital content evaluation. The study contributes to the growing literature on multimodal data analytics and its applications in education, projecting its role in various educational contexts and fostering innovative methods for data analysis and interpretation.
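"Integrating data from multiple channels" before prediction is commonly done via multimodal feature fusion. A minimal sketch of the simplest variant, early fusion (concatenating per‐group discourse and emotion features and fitting a single classifier), is shown below with scikit‐learn; the feature names, synthetic data, and model choice are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_groups = 60
# Hypothetical per-group features: discourse (e.g., talk turns, questions asked,
# elaborations, disagreements) and emotion (e.g., mean valence, arousal, confusion).
discourse = rng.normal(size=(n_groups, 4))
emotion = rng.normal(size=(n_groups, 3))

# Early fusion: concatenate the two channels into one feature matrix.
X = np.hstack([discourse, emotion])
# Synthetic binary outcome (e.g., high vs. low lesson-evaluation performance).
y = (discourse[:, 0] + emotion[:, 0]
     + rng.normal(scale=0.5, size=n_groups) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```

With real classroom data, inspecting the fitted coefficients (or feature importances of a richer model) is one way to surface which discourse and emotion features act as significant predictors, as the study reports.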