Supervised machine learning in multimodal learning analytics for estimating success in project-based learning
- John Wiley & Sons
Multimodal learning analytics provides researchers with new tools and techniques to capture different types of data from complex learning activities in dynamic learning environments. This paper investigates the use of diverse sensors, including computer vision, user-generated content, and data from the learning objects (physical computing components), to produce high-fidelity, synchronised multimodal recordings of small groups of learners interacting. We processed and extracted different aspects of the students' interactions to answer the following question: Which features of student group work are good predictors of team success in open-ended tasks with physical computing? To answer this question, we explored different supervised machine learning approaches (traditional and deep learning techniques) to analyse data from multiple sources. The results illustrate that state-of-the-art computational techniques can be used to generate insights into the "black box" of learning in students' project-based activities. The features identified from the analysis show that the distance between learners' hands and faces is a strong predictor of the quality of students' artefacts, which can indicate the value of student collaboration. Our research shows that both promising new approaches, such as neural networks, and more traditional regression approaches can be used to classify multimodal learning analytics data, and that each has advantages and disadvantages depending on the research questions and contexts being investigated. The work presented here is a significant contribution towards developing techniques to automatically identify the key aspects of student success in project-based learning environments, and ultimately to help teachers provide appropriate and timely support to students in these fundamental aspects.
multimodal learning analytics
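The abstract's core analytical idea, using a hand-to-face distance feature to predict artefact quality with both a traditional classifier and a neural network, can be sketched as follows. This is an illustrative sketch only: the paper's actual features, dataset, and model configurations are not given here, so the example uses synthetic per-group features and off-the-shelf scikit-learn models as stand-ins.

```python
# Hedged sketch: synthetic data standing in for the paper's real multimodal
# features. Assumes (hypothetically) that smaller mean hand-to-face distance
# correlates with higher artefact quality, as the abstract suggests.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic per-group features: mean and standard deviation of the
# hand-to-face distance (cm) observed over a session.
n_groups = 400
mean_dist = rng.uniform(10, 80, n_groups)
std_dist = rng.uniform(1, 15, n_groups)
X = np.column_stack([mean_dist, std_dist])

# Label: 1 = high-quality artefact when mean distance is small (plus noise).
y = (mean_dist + rng.normal(0, 8, n_groups) < 45).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Traditional regression-based approach vs. a small neural network,
# mirroring the two families of methods compared in the paper.
logit = LogisticRegression().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

acc_logit = logit.score(X_te, y_te)
acc_mlp = mlp.score(X_te, y_te)
print(f"logistic regression accuracy: {acc_logit:.2f}")
print(f"small neural network accuracy: {acc_mlp:.2f}")
```

With a strong synthetic signal both models should comfortably beat chance; on real multimodal data the trade-offs the abstract mentions (interpretability of the regression coefficients versus the flexibility of the network) become the deciding factor.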