Self-Regulated Learning (SRL) is related to increased learning performance. Scaffolding learners' SRL activities in a computer-based learning environment can help improve learning outcomes, because students do not always regulate their learning spontaneously. Based on theoretical assumptions, scaffolds should be continuously adaptive and personalized to students' ongoing learning progress in order to promote SRL. The present study investigated the effects of analytics-based personalized scaffolds, facilitated by a rule-based artificial intelligence (AI) system, on students' learning processes and outcomes through real-time measurement and support of SRL using trace data. In a pre-post experimental design, students received personalized scaffolds (n = 36), generalized scaffolds (n = 32), or no scaffolds (n = 30) during learning. Findings indicated that personalized scaffolds induced more SRL activities, but no effects on learning outcomes were found. Process models revealed large similarities in the temporal structure of learning activities between groups, which may explain why no group differences in learning performance were observed. In conclusion, analytics-based personalized scaffolds, informed by real-time measurement of students' SRL, are a first step towards AI-supported adaptive SRL scaffolding that should be developed further in future research.
•Analytics-based scaffolds using trace data can support learning in real time.
•Personalized scaffolds induce metacognitive activities.
•Personalized scaffolds are most effective in promoting monitoring activities.
•Students seldom plan and evaluate their learning and need more focused support.
•Process models reveal a possible explanation for the missing effects on learning outcomes.
Background
Assignments that involve writing based on several texts are challenging for many learners. Formative feedback supporting learners in these tasks should be informed by the characteristics of the evolving written product and by the characteristics of the learning processes learners enacted while developing that product. However, formative feedback in writing tasks based on multiple texts has almost exclusively focused on the essay product and has rarely included SRL processes.
Objectives
We explored the viability of using product and process features to develop machine learning classifiers that identify low‐ and high‐performing essays in a multi‐text writing task.
Methods
We examined learning processes and essay submissions of 163 graduate students working on an authentic multi‐text writing assignment. We utilised learners' trace data to obtain process features and state‐of‐the‐art natural language processing methods to obtain product features for our classifiers.
Results and Conclusions
Of four popular classifiers examined in this study, Random Forest achieved the best performance (accuracy = 0.80 and recall = 0.77). The analysis of important features identified in the Random Forest classification model revealed one product (coverage of reading topics) and three process (elaboration/organisation, re‐reading and planning) features as important predictors of writing quality.
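As a minimal sketch (not the authors' code), the kind of Random Forest classifier described above can be illustrated with scikit-learn. The feature names and synthetic data below are hypothetical, standing in for the one product feature (coverage of reading topics) and three process features (elaboration/organisation, re-reading, planning) reported as important; the accuracy and recall obtained on synthetic data will not match the reported values.

```python
# Illustrative sketch only: a Random Forest classifier over combined
# product and process features, with synthetic data in place of the
# study's trace and NLP features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 163  # number of learners in the study

# Hypothetical feature columns: [topic_coverage, elaboration_organisation,
# re_reading, planning] -- one product feature and three process features.
X = rng.random((n, 4))
# Synthetic binary labels (1 = high-performing essay), loosely driven by
# the product feature so the example has learnable structure.
y = (X[:, 0] + 0.3 * rng.standard_normal(n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", round(accuracy_score(y_te, pred), 2))
print("recall:", round(recall_score(y_te, pred), 2))
# Feature importances indicate which product/process features drive predictions.
print("importances:", clf.feature_importances_.round(2))
```

In the study itself, the important features were read off the fitted model's feature importances; the same `feature_importances_` attribute is queried here on the synthetic data.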
Major Takeaways
The classifier can be used as part of a future automated writing evaluation system that will support formative assessment at scale in writing tasks based on multiple texts across different courses. Based on important predictors of essay performance, guidance can be tailored to learners at the outset of a multi‐text writing task to help them do well in the task.
Lay Description
What is already known about this topic?
Both product and process features should be used to inform formative feedback on writing.
Providing product‐ and process‐oriented feedback to learners is challenging.
Automatic writing evaluation systems have mainly relied upon product features.
Automated analysis of learners' trace data and their essay drafts is a promising avenue.
What this paper adds?
An accurate machine learning classifier that identifies low‐ and high‐scoring essays.
The classifier utilised both product and process features.
We obtained process features from learners' trace data in a digital learning environment.
We computed product features using state‐of‐the‐art text analytical methods.
Implications for practice and/or policy
The classifier can be used as a part of a future automated writing evaluation system.
We revealed learning processes and essay characteristics that influence performance.
Based on important predictors of performance, formative feedback can be given to learners.