Temporal Models for Predicting Student Dropout in Massive Open Online Courses
- 1 November 2015
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 256-263
- https://doi.org/10.1109/icdmw.2015.174
Abstract
Over the past few years, the rapid emergence of massive open online courses (MOOCs) has sparked a great deal of research interest in MOOC data analytics. Dropout prediction, or identifying students at risk of dropping out of a course, is an important problem to study due to the high attrition rates commonly found on many MOOC platforms. Recently proposed methods for dropout prediction apply relatively simple machine learning techniques, such as support vector machines and logistic regression, to features that reflect student activities on a MOOC platform, such as lecture video watching and forum participation, over the study period of a course. Since these features are captured continuously for each student over a period of time, dropout prediction is essentially a time series prediction problem. By regarding dropout prediction as a sequence classification problem, we propose several temporal models for solving it. In particular, in extensive experiments conducted on two MOOCs offered on Coursera and edX, a recurrent neural network (RNN) model with long short-term memory (LSTM) cells beats the baseline methods as well as our other proposed methods by a large margin.
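The setup described in the abstract can be illustrated with a minimal sketch: raw activity events are aggregated into per-week feature vectors for each student, and a simple non-temporal baseline (here a hand-rolled logistic regression on the flattened sequence, in the spirit of the baselines the paper compares against) is trained to predict dropout. The event categories, feature layout, and toy training loop below are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def weekly_features(events, n_weeks):
    """Aggregate a student's raw event log into per-week activity counts.
    events: list of (week_index, event_type) tuples, where event types are
    hypothetical categories such as 'video' or 'forum'.
    Returns one [video_count, forum_count] vector per week."""
    feats = [[0, 0] for _ in range(n_weeks)]
    for week, kind in events:
        if kind == 'video':
            feats[week][0] += 1
        elif kind == 'forum':
            feats[week][1] += 1
    return feats

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression on flattened weekly features -- the kind of
    non-temporal baseline a sequence model (e.g. an LSTM) would replace.
    X: list of flattened feature vectors; y: dropout labels (1 = dropped out)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted probability that the student drops out."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

A temporal model as proposed in the paper would instead feed the per-week vectors to the classifier one week at a time, rather than flattening them, so the order of activity is preserved.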