Imperial College London

Dr Kimberley Foley

Faculty of Medicine, School of Public Health

Research Associate

Contact

k.foley

Location

319 Reynolds Building, Charing Cross Campus

Publications

Citation

BibTeX format

@article{Alturkistani:2020:10.2196/13851,
author = {Alturkistani, A and Lam, C and Foley, K and Stenfors, T and Van Velthoven, M and Meinert, E},
doi = {10.2196/13851},
journal = {Journal of Medical Internet Research},
pages = {1--14},
title = {Massive Open Online Course (MOOC) evaluation methods: A systematic review},
url = {http://dx.doi.org/10.2196/13851},
volume = {22},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Background: Massive open online courses (MOOCs) have the potential for broad education impact due to many learners undertaking these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses. Objective: This review aims to identify current MOOC evaluation methods in order to inform future study designs. Methods: We systematically searched the following databases: (1) SCOPUS; (2) Education Resources Information Center (ERIC); (3) IEEE Xplore; (4) Medline/PubMed; (5) Web of Science; (6) British Education Index; and (7) the Google Scholar search engine for studies from January 2008 until October 2018. Two reviewers independently screened abstracts and titles of the studies. Published studies in English that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, and the data collection and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for RCTs and the NIH - National Heart, Lung and Blood Institute quality assessment tools for cohort observational studies and for “Before-After (Pre-Post) Studies With No Control Group”. Results: The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. Studies mostly had a cross-sectional design evaluating one version of a MOOC. We found that studies mostly had a learner-focused, teaching-focused or platform-focused motivation to evaluate the MOOC. The most used data collection methods were surveys, learning management system data and quiz grades, and the most used data analysis methods were descriptive and inferential statistics. The methods for evaluating the outcomes of these courses were diverse and unstructured. Most studies with cross-sectional design had a low-quality assessment, whereas randomized controlled trials and quasi-experimental studies receiv
AU - Alturkistani,A
AU - Lam,C
AU - Foley,K
AU - Stenfors,T
AU - Van Velthoven,M
AU - Meinert,E
DO - 10.2196/13851
EP - 14
PY - 2020///
SN - 1438-8871
SP - 1
TI - Massive Open Online Course (MOOC) evaluation methods: A systematic review
T2 - Journal of Medical Internet Research
UR - http://dx.doi.org/10.2196/13851
UR - https://www.jmir.org/2020/4/e13851/
UR - http://hdl.handle.net/10044/1/77312
VL - 22
ER -
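For readers who script their reference management, the RIS record above is a simple line-oriented tag-value format: a two-letter tag, a hyphen separator, and a value, with repeated tags such as AU and UR accumulating into lists, and ER marking the end of a record. A minimal stdlib sketch of a reader for records like this one (not a full RIS implementation; the sample data below is abbreviated from the record above):

```python
def parse_ris(text: str) -> dict:
    """Parse a single RIS record into a dict mapping each two-letter
    tag to the list of values seen for it. Tolerates the one-space and
    two-space separator variants that appear in exported records."""
    record: dict = {}
    for line in text.splitlines():
        tag, sep, value = line.partition(" - ")
        tag = tag.strip()
        if not sep or len(tag) != 2:
            continue          # skip blank or malformed lines
        if tag == "ER":       # end-of-record marker
            break
        record.setdefault(tag, []).append(value.strip())
    return record

sample = """TY  - JOUR
AU  - Alturkistani,A
AU  - Foley,K
PY  - 2020///
VL  - 22
ER  - """

rec = parse_ris(sample)
print(rec["AU"])  # ['Alturkistani,A', 'Foley,K']
print(rec["VL"])  # ['22']
```

Repeated tags are kept as lists rather than overwritten, which is what makes multi-author and multi-URL records like the one above round-trip correctly.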