Published on 8.7.2020 in Vol 22, No 7 (2020): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/17318.
Benefits of Massive Open Online Course Participation: Deductive Thematic Analysis

Original Paper

Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden

*all authors contributed equally

Corresponding Author:

Per J Palmgren, DC, PhD

Department of Learning, Informatics, Management and Ethics

Karolinska Institutet

Tomtebodavägen 18a

Stockholm, 171 77

Sweden

Phone: 46 8 524 85 294

Email: per.palmgren@ki.se


Abstract

Background: Massive open online courses (MOOCs), as originally conceived, promised to provide educational access to anyone with an internet connection. In practice, however, the reach of MOOC education has proved more limited than anticipated. Nonetheless, leading universities continue to offer MOOCs, including many in the health sciences, on a number of private platforms. Therefore, research on online education must include a thorough understanding of the role of MOOCs. To date, studies on MOOC participants have focused mainly on learners’ assessment of the course. It is known that MOOCs are not reaching the universal audiences that were predicted, and much knowledge has been gained about learners’ perceptions of MOOCs. However, there is little scholarship on what learners themselves gain from participating in MOOCs.

Objective: As MOOC development persists and expands, scholars and developers need a clear understanding of the role of MOOCs in education, grounded in what these courses actually offer their participants. The objective of this qualitative synthesis of a set of MOOC evaluation studies was to explore outcomes for MOOC learners, that is, how the learners themselves benefit from participating in MOOCs.

Methods: To explore MOOC learners’ outcomes, we conducted a qualitative synthesis in the form of a deductive thematic analysis, aggregating findings from 16 individual studies selected from an existing systematic review of MOOC evaluation methods. We structured our inquiry using the Kirkpatrick model, considering Kirkpatrick levels 2, 3, and 4 as potential themes in our analysis.

Results: Our analysis identified six types of Kirkpatrick outcomes in the 16 studies. Five of these outcome types (learning/general knowledge, skills, attitudes, confidence, and commitment) fit into Kirkpatrick Level 2, while Kirkpatrick Level 3 outcomes concerning behavior/application were seen in four studies. Two additional themes were identified outside the Kirkpatrick framework: culture and identity outcomes and affective/emotional outcomes. Kirkpatrick Level 4 was not represented among the outcomes we examined.

Conclusions: Our findings point to some gains from MOOCs. While we can expect MOOCs to persist, how learners benefit from the experience of participating in MOOCs remains unclear.

J Med Internet Res 2020;22(7):e17318

doi:10.2196/17318

Keywords



Introduction

When the first massive open online course (MOOC) was offered in 2008, the MOOC format—free, online, and open to anyone with an internet connection—was touted as revolutionary for its potential to democratize access to educational opportunities due to its theoretically universal availability [1-3]. The earliest MOOCs used a connectivist paradigm in which the course was built from networks of online resources and relied on openness and participation from learners. These so-called cMOOCs had the potential to allow learners to participate in their own education outside the traditional, face-to-face classroom setting and to connect with learners worldwide [4]. Extended MOOCs (xMOOCs) brought the MOOC format back to a more traditional structure, with instructors determining the content while still providing “open” availability to anyone with internet access. In practice, there are limits to what this expansive availability has accomplished [2,5]. However, as MOOCs persist, it is useful to explore their role in education by examining what they do offer their participants.

Learning is a complex phenomenon that can be described from different perspectives. Understanding learning is about understanding not only learning processes but also the conditions that influence—and are influenced by—the learning process [6]. In this paper, learning is understood from a constructivist and social-constructivist perspective in which reality and new understanding are constructed by learners on the basis of their previous knowledge, perceptions, and experiences. Learning thus consists of contextual aspects (ie, teachers present information in a way that enables learners to construct meaning on the basis of their own experiences, with a focus on situating learning in an authentic activity); cognitive aspects (ie, recognizing individuals’ perception, memory, and meaning-making); and social aspects (ie, centering on learning as a social activity that occurs through interactions between the learner and others) [7,8]. This conception of learning thus moves past the distinction between cMOOCs and xMOOCs, which has become artificial and no longer useful [9].

A number of systematic reviews have examined MOOCs [4,10-17]. These reviews indicate that much research on MOOCs focuses on evaluating noncompletion rates and retention vs attrition; learner motivation and engagement, as well as other behavioral elements, and how these relate to retention and achievement; the implications of the latter for MOOC design; and learners’ own assessments of the courses [3,4,10,12,16,18]. Research also points to a lack of studies on learners’ own experiences and outcomes [3,4,10]; however, there are some exceptions [14,19]. For example, in their review, Pilli and Admiraal [19] investigated MOOC learner outcomes with the intention of informing MOOC course design. Joksimović et al [14] argued that outcomes and learner engagement are commonly differentiated in the MOOC literature; their systematic review, however, proposed an approach that reconnects the two, especially for MOOCs that do not include assessments (eg, cMOOCs as originally conceived). Joksimović et al [14] built on a model by Reschly and colleagues [20] that conceives of learning outcomes as “proximal” or “distal,” with academic, social, and affective outcomes within each; they modified this model for the “nonformal, digital educational settings” of MOOCs [14]. Despite their work on outcomes, Joksimović and colleagues reiterated the finding that attempts to measure or evaluate the benefits to learners of participating in MOOCs have thus far been limited.

Another systematic review, by Rowe et al [17], investigated the utility of open online courses (OOCs, including MOOCs) in health professions education. They evaluated the available research with a framework comprising five “outcome” categories: effectiveness (increase in learner knowledge), learner experiences, feasibility, pedagogy, and economics; they concluded that the available evidence neither unequivocally supports nor refutes the use of such courses. Although their review was limited to the health professions, it highlighted the absence of rigorous research on MOOCs alongside the persistence of these courses. Their “effectiveness” category further highlighted the absence of research on benefits to MOOC learners, specifically in the health professions. They argued that the application of MOOCs in health professions education should be limited until considerably more high-quality research has been performed [17].

In their recent systematic review, Alturkistani et al [21] also added to the discourse on MOOC evaluation methods, identifying three “evaluation-focused categories” among the studies they reviewed: learner-focused, teaching-focused, and MOOC-focused [21]. We approached this review as a jumping-off point to further synthesize understanding of MOOC learner outcomes. Here, we unpack the learner-focused category in [21] and, more specifically, its “learning outcomes and experience” subcategory to investigate the learner outcomes for the included MOOCs. In our study, “learner outcomes” are direct statements that describe the knowledge, skills, and attitudes that learners have demonstrated or are expected to reliably demonstrate when successfully completing a course. Learner outcomes are an understudied area that warrants further investigation, as MOOCs are a learning environment distinct from traditional classrooms and even from other forms of e-learning, and they continue to be embraced as an educational modality [22].

Thus, despite their persistence, MOOCs have not lived up to the early expectation that they would allow widespread, nearly universal access to education. For example, there is consistent evidence that learners who use MOOCs, and indeed those who are more likely to complete them, are generally more educated and affluent [1,23,24]. There is also insufficient evidence that MOOCs are useful in areas such as health professions education [17]. MOOC learners are heterogeneous along numerous dimensions, including native language, prior training, age, economic status, and geographic location [24]. The heterogeneity of the expectations and goals of MOOC learners has also undoubtedly contributed to the difficulty of evaluating MOOCs and characterizing their benefits, a difficulty that is illustrated below in the heterogeneity of the studies reviewed. Thus, if MOOCs are not, in practice, democratizing education, and they have not matched traditional learning settings in at least some professional fields, what are they offering? In this study, we focus our attention on what learners do gain from participating in MOOCs, including but not limited to performance measures; that is, we explore how learners benefit from the experience of participating in MOOCs, including and beyond outcomes directly related to learning.


Methods

We conducted a qualitative synthesis in the form of a deductive thematic analysis, aggregating findings from individual studies, to explore MOOC learners’ outcomes. The datasets used and analyzed during the current study are available from the first author on reasonable request. To structure our inquiry, we relied on the Kirkpatrick model [25], a commonly used framework for evaluating learning with applications in multiple learning and training settings. This model frames training on four levels: (1) reaction, (2) learning, (3) behavior, and (4) results. A more recent version [26] updates and clarifies the model, proposing that reaction includes customer satisfaction, engagement, and relevance; learning includes knowledge, skills, attitude, confidence, and commitment; and behavior refers to how the learner applies the learning “on the job.” The more recent version of the behavior level adds “processes and systems that reinforce, encourage, and reward performance of critical behaviors on the job” [26], which can be seen as catalysts for applying what has been learned. These processes and systems, which include job aids, coaching, work review, and incentive systems, are referred to in [26] as “required drivers”: factors that increase the likelihood that people will retain and apply what they have learned in a given setting. Results are the targeted outcomes of the training, such as whether the results of the training are seen within an organization; the more recent version adds “leading indicators” (short-term measures that can indicate whether the results are likely to occur) [26].
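
For readers who prefer a schematic overview, the levels and subthemes described above can be written out as a simple data structure. The following is a minimal illustrative sketch in Python; the naming is ours rather than part of the Kirkpatrick materials, and our analysis itself used no software:

```python
# Minimal sketch of the four Kirkpatrick levels and the subthemes described
# above; illustrative only (the analysis in this study was done by hand).
KIRKPATRICK_LEVELS = {
    1: {"name": "Reaction",
        "subthemes": ["customer satisfaction", "engagement", "relevance"]},
    2: {"name": "Learning",
        "subthemes": ["knowledge", "skills", "attitude", "confidence",
                      "commitment"]},
    3: {"name": "Behavior",
        "subthemes": ["application on the job", "required drivers"]},
    4: {"name": "Results",
        "subthemes": ["targeted outcomes", "leading indicators"]},
}
```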

The studies in the current synthesis derive from Alturkistani et al’s systematic review of MOOC evaluation methods [21]. Their review included studies from 2008 to 2018 that focused primarily on MOOC evaluation and studies that reviewed or applied MOOC evaluation methods. Both quantitative and qualitative studies, as well as grey literature, were included after a careful assessment of methodological quality. In recent years, the contribution of qualitative evidence to research syntheses has been increasingly acknowledged [27], which is in line with the epistemological stance of this review. The complete search strategy and further details of the source review [21] can be found in [18]. Alturkistani et al [21] identified 3275 records; after screening, the final review included 33 studies.

Specifically, Alturkistani et al’s “learning outcomes and experience” subcategory, which included 21 studies, was the basis for the current synthesis, as we looked at what learners gain from the experience of participating in MOOCs. We reviewed each paper in this subcategory for findings that described outcomes specific to the learners themselves. We did not include measures of engagement, motivation, completion, or attrition in our analysis unless they were clearly tied to outcomes for learners. In an additional step intended to capture all learner outcomes, we examined Multimedia Appendix 3 in Alturkistani et al’s review [21], which covered all 33 studies. As a result of this review, we excluded 12 studies that did not include clear outcomes for learners (Figure 1), which left 21 studies for our analysis. As the analysis proceeded, we determined that the outcomes in 6 of these 21 studies were not clear enough to include. Notably, we did include one study [28] that was not part of Alturkistani et al’s “learning outcomes and experience” subcategory. Of the resulting 16 studies for analysis, 4 had more than one outcome. Multimedia Appendix 1 describes this procedure in detail.
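
To make the selection arithmetic explicit, the flow from the source review to our analytic sample can be restated as a small check (an illustrative Python sketch; the variable names are ours and simply restate the counts given above and in Figure 1):

```python
# Restating the study selection flow described above (see Figure 1 and
# Multimedia Appendix 1); variable names are illustrative.
records_in_source_review = 33    # studies in Alturkistani et al [21]
excluded_no_clear_outcomes = 12  # no clear outcomes for learners
remaining = records_in_source_review - excluded_no_clear_outcomes  # 21
excluded_during_analysis = 6     # outcomes not clear enough to include
added_from_outside = 1           # study [28], added from outside the subcategory
final_sample = remaining - excluded_during_analysis + added_from_outside
assert final_sample == 16
```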

More specifically, in this qualitative synthesis, we performed a deductive thematic analysis [29] in which the starting themes were the Kirkpatrick levels. We extracted all outcomes from the 16 studies and then placed each in Kirkpatrick Level 2, 3, or 4. This first coding was conducted by ERB; TS and PJP then reviewed the results. Second, ERB further analyzed the findings in each category according to the subthemes within each Kirkpatrick level. Subsequently, the findings were discussed and subjected to adjustments until consensus among all investigators was reached. Although the aforementioned steps appear to be consecutively ordered, the process of analysis and search for patterns was in no way linear; rather, it was iterative and recursive. No software program was used to aid the analysis. The structure of our analysis allows for the possibility that the same study has multiple outcomes and thus appears under more than one level. Level 1 (reaction) in the Kirkpatrick model was not of interest to our investigation, as there is a great deal of existing research on learners’ assessment of MOOCs.
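
Conceptually, this coding step assigns each extracted outcome to a Kirkpatrick level (or sets it aside for the inductive analysis described below) and then tallies studies per level, allowing one study to appear under several levels. The sketch below illustrates the idea with hypothetical extraction records; it is not the tooling we used, as the analysis was performed by hand:

```python
from collections import defaultdict

# Hypothetical extraction records: (study, assigned Kirkpatrick level).
# None marks outcomes set aside for the inductive "beyond Kirkpatrick" analysis.
extracted_outcomes = [
    ("Konstan et al [31]", 2),
    ("Konstan et al [31]", 3),  # the same study can appear under several levels
    ("Stephens and Jones [39]", None),
]

studies_per_level = defaultdict(set)
for study, level in extracted_outcomes:
    studies_per_level[level].add(study)

for level in sorted(studies_per_level, key=str):
    print(level, "->", sorted(studies_per_level[level]))
```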

Outcomes that could not be matched with the Kirkpatrick levels were set aside for a separate inductive thematic analysis, which is presented as “Outcomes beyond Kirkpatrick.”

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram: systematic review (A) for a synthesis paper on MOOC learning outcomes (B). Modified from Alturkistani et al [21].

Results

Our analysis resulted in six types of outcomes. These are summarized in Table 1, framed by the levels in the Kirkpatrick model.

Kirkpatrick Level 2: Learning

Our deductive analysis showed that 15/16 (94%) of the examined studies included one or more outcomes corresponding to Kirkpatrick Level 2. The Learning theme here incorporates concepts such as knowledge, skill, attitude, confidence, and commitment. Each subtheme is presented below using the identified data and illustrated with supporting quotations.

Subtheme: Knowledge

Most of the Level 2 outcomes we identified were scores or survey items that assessed knowledge in some form. For example, in their MOOC on new media in teaching and learning, Chen et al [30] reported “learning performance” via quiz scores and a final paper, for which the participants could earn “Excellence Awards.” Several studies in our sample took a longitudinal view of learning outcomes via a pretest/posttest design. For example, Konstan et al [31] used a longitudinal design to test knowledge of recommender systems technology (technology that predicts preferences based on previous behavior); based on precourse and postcourse test scores within their MOOC, the gains in knowledge measured at the end of the course persisted at a 5-month follow-up in most cases. Further, in a MOOC designed to prepare medical students for global health experiences, Jacquet and colleagues [32] found an increase in post-MOOC test scores compared to pre-MOOC scores. Liang et al [33] reported that average quiz and homework scores increased with the degree of participation in online activities. Cross [34] used preassessments and postassessments to track changes in knowledge on a scale from “novice” to “expert,” while Colvin et al [35] reported improved scores on postcourse versus precourse tests in introductory physics, and MacKay et al [36] saw a postcourse increase in scores on their assessment of participants’ knowledge about animal welfare.

Table 1. Outcomes of MOOC studies framed by Kirkpatrick Level 2 or Level 3.
Study | Data collection | Data analysis | Outcome variables | Outcome findings

Level 2: Learning

Subtheme 1: General learning/change in knowledge

Chen et al (2015) [30] | Scores on quizzes and final paper | Inferential statistics | Possible “Excellent Paper,” “Excellent Participation,” and “Excellent Group Member” awards | Learners received these awards if they fulfilled the criteria

Konstan et al (2015) [31] | Three-part longitudinal design: precourse, postcourse, and 5-month follow-up “knowledge tests” and surveys | Inferential statistics; qualitative analysis | Assessed knowledge of recommender systems (a) | Gains in knowledge and 5-month retention of acquired knowledge

Jacquet et al (2018) [32] | LMS (b) data; pre-MOOC and post-MOOC knowledge tests | Inferential statistics | Score on knowledge test | Increased knowledge score from pretest to posttest

Liang et al (2014) [33] | Assessments: quizzes and homework | Inferential statistics | Average assessment score | Increase in assessment score related to degree of participation

Cross (2013) [34] | Precourse and postcourse surveys; LMS | Descriptive statistics | Knowledge: “novice” to “expert” (a) | Increase in knowledge

Colvin et al (2014) [35] | Normalized gain between pretests and posttests in introductory physics; “ability” based on test items attempted, analyzed with Item Response Theory (IRT) | Inferential statistics | Comparison of pre-MOOC and post-MOOC physics knowledge and “ability” | Learning (measured via posttest score) across several cohorts identified using IRT

MacKay et al (2016) [36] | Precourse and postcourse assessments of animal welfare knowledge | Inferential statistics | Scores on animal welfare knowledge assessment | Increased scores

Subtheme 2: Skill

Brunton et al (2017) [37] | Weekly Likert scale quizzes during the MOOC: “individual digital readiness tools” and postcourse quiz | Descriptive statistics | Preparedness for online learning (a) | Self-assessed changes in preparedness for online learning

Rubio (2015) [38] | Precourse and postcourse comprehensibility ratings | Inferential statistics | Spanish comprehensibility (language pronunciation) | Increased comprehensibility in postcourse ratings

Stephens and Jones (2014) [39] | Precourse and postcourse surveys with mostly open-ended items | Content analysis | Skills discovery (a) | Technological skills

Liu et al (2014) [40] | End-of-course surveys (Likert scale and open-ended); email interviews | Descriptive and thematic analysis (focused coding) | Three things students learned (a) | Skills in data visualization, critiquing, and creating infographics

Subtheme 3: Commitment

Alturkistani et al (2018) [41] | Case studies; interviews | Thematic analysis | Learning achievement; use of information in the workplace (a) | Intention to apply knowledge

Subtheme 4: Attitude

MacKay et al (2016) [36] | Multiple-choice quizzes; confidence and attitude surveys (mostly Likert scale) | Inferential statistics | Change in attitudes; certificate of achievement for completion (a) | Change in attitude

Subtheme 5: Confidence

Hossain et al (2015) [28] | Ten-point confidence-to-treat scale | Inferential statistics | Confidence to treat spinal cord injury (a) | Gains in confidence

Cross (2013) [34] | Precourse/postcourse survey; LMS | Descriptive statistics | Confidence to apply learning (a) | Gains in confidence

Mackness et al (2013) [42] | Interviews (face-to-face and email) and focus groups; assessment of microteaching | Qualitative case study approach | Confidence to participate in social learning environments (a) | Gains in confidence

Lei et al (2015) [43] | Pre-MOOC and post-MOOC surveys; forum threads | Sentiment analysis | Identity and confidence (a) | Confidence in work; confidence to inspire

Milligan and Littlejohn (2014) [44] | Mid-MOOC interviews | Qualitative analysis | Changes in practice (a) | Confidence about practices on the job

Level 3: Behavior

Subtheme: Behavior/application

Milligan and Littlejohn (2014) [44] | Survey and interview | Qualitative analysis | Application of learning in professional practice (a) | Integrating new understanding in practice

Lei et al (2015) [43] | Pre-MOOC and post-MOOC surveys; forum threads | Sentiment analysis | Effects on learners and community (a) | Bringing knowledge back to community

Cross (2013) [34] | Precourse/postcourse survey; LMS | Descriptive statistics | Changes in practice (a) | Implementation of tools in course design

Konstan et al (2015) [31] | Follow-up interview and survey | Inferential statistics | Application of new recommender system skills (a) | Application of systems at work, school, business

(a) Includes a self-report.

(b) LMS: learning management system.

Subtheme: Skill

We found several examples of skill outcomes, including self-assessed preparedness (readiness for online education) [37] and improvement in Spanish language pronunciation and comprehensibility measured by pre-MOOC and post-MOOC assessments [38]. Further, participants in a library and information science MOOC were asked, “What did you gain most from taking part in the MOOC?” Their responses indicated that “Students gained new technological skills through their learning experience” [39]. Liu et al [40] found that learners gained skill through learning to “visualize data and critique infographics (and) learning visualization concepts and…tool use”; these were the most frequently cited “three things [students] had learned” in a journalism MOOC.

Subthemes: Commitment, Attitude, and Confidence

Other Level 2 outcomes were commitment, as shown through intention to apply knowledge [41]; attitude about animal welfare [36]; and confidence to treat patients, as measured in a randomized controlled trial comparing a MOOC with a self-directed online learning module [28]. Additionally, Cross [34] reported that learners gained confidence with regard to applying what they had learned, and Mackness et al [42] also reported confidence to participate in various interactive learning activities:

They also gain the confidence to attend and contribute to live synchronous sessions, to openly share their work and ideas, and to cooperate and/or collaborate in social networking environments. “They shift from being consumers to producers.”

In their MOOC on Asian vernacular architecture, Lei et al [43] used a case study design to investigate learners’ postlearning experiences, asking, “How has the course influenced learners and their surrounding community?” This influence is reflected in the following learner’s experience:

It is through learning that I have gained the most confidence, in my identity and in my work. And I hope that this course would be the one of many stepping stones towards me being able to help inspire and nurture future generations….

Using a clinical trials MOOC, Milligan and Littlejohn [44] asked learners halfway through the course “to reflect on how their practice had changed as a result of the course.” Some learners had already seen an effect on their confidence and perspective: “These respondents reported a range of general benefits: that the course had given them a new perspective, had made them assured, or had helped them bring a greater criticality to their practice.” One participant stated, “I know why and why not…you have an overview, I cannot say I apply everything in my day to day work, but the fact that you feel more confident, for me, it helps a lot.” This outcome in turn intersects with Kirkpatrick Level 3, as discussed in the next section.

Kirkpatrick Level 3: Behavior/Application

Our analysis found 4/16 studies (25%) with evidence of Level 3 outcomes. Level 3 includes application via critical behaviors as well as the presence of processes that make it more likely that people will retain and apply what they have learned in a given setting (the abovementioned catalysts for application, or “required drivers”).

In addition to effects on confidence (Level 2), Milligan and Littlejohn [44] found evidence of Level 3 outcomes from their clinical trials MOOC; in answer to the same question as above (how their practice changed as a result of the course), most learners reported having already incorporated their learning. For example, the respondent quoted above also reported immediate effects: “Well, it gives me a better understanding of why I do what I do…I understand why I have to submit my protocol or a complete or total submission to authorities, how a protocol has been developed.” [44] Another respondent said, “It is much, much better, I could address all of the challenges much better and make better decisions, and actually I participate with this CRO in developing the protocol and the study documents and everything.”

Lei and colleagues [43] described effects on how the learners brought their experience back to their communities, a behavioral application which reflects Kirkpatrick Level 3. For example, one participant from an area damaged by earthquakes reflected:

This course helped me to see the significance of the collapsed houses, temples, shrines, monuments and courtyards in a different angle which otherwise I would not have been able to see…I have already started contributing my knowledge with the local community as we come together to rebuild what has been destroyed.

Cross [34] described learners’ goals, including plans to implement tools from the MOOC in their course design; some learners reported having already done so, which is another example of application of the MOOC experience. Employing a longitudinal study design, Konstan et al [31] investigated MOOC learners’ application of course content (recommender systems technology). Kirkpatrick Level 3 behaviors are evident in the participants’ reports of incorporating the systems at work, at school, or in entrepreneurial settings; some also applied the underlying algorithms in other contexts.

Kirkpatrick Level 4

In this qualitative synthesis, we did not find any data congruent with Kirkpatrick Level 4, which includes outcomes and “leading indicators.”

Outcomes Beyond Kirkpatrick

Not all of the outcomes described in the studies align well with the Kirkpatrick framework; hence, we present these outcomes separately here. Through an inductive thematic analysis, we identified two themes among these outcomes: “culture and identity” outcomes and “affective/emotional” outcomes. Culture and identity outcomes included “insights about themselves through personal reflection about their learning styles, professional practices, and the ways they view the world” [39], as well as connection to a community, whether of fellow educators [39,42] or those with a shared cultural heritage [43]. Affective outcomes such as “excitement” and “inspiration” are evident in [39], where learners gained “inspiration, energy, and excitement about the field.”


Discussion

Principal Results

In this qualitative analysis, we explored the benefits that MOOCs in a broad range of subjects offer their participants. We synthesized the types of outcomes reported in a set of MOOC studies, including but not limited to outcomes that assess learning in some way. Using the Kirkpatrick model as a framework, the most prominent finding was that most of the MOOCs described in the included studies had only outcomes that could be categorized as Kirkpatrick Level 2. Kirkpatrick Level 3 outcomes were also represented, although these were not as common as Level 2 outcomes. We did not observe any Kirkpatrick Level 4 outcomes in the data we analyzed. If a MOOC were to aim for or result in Level 4 outcomes, we would expect to see changes at the organizational level. These might take the form of implemented changes in policy in a health care setting after a group of managers participated in a policy MOOC or, in higher education, a change in pedagogical training for educators after several faculty members attended a MOOC. Our complementary analysis of outcomes that did not align with the Kirkpatrick framework yielded two additional themes.

Previous Research

Previous research has shown that students generally perform better in face-to-face courses than in online courses [45], and several of the studies in our review compared MOOC and non-MOOC learning contexts. The studies analyzed here did not report outcomes that were unique to MOOCs; however, they did provide insight into what MOOCs do and do not offer to participants. For example, in a randomized controlled trial by Hossain et al [28] comparing a self-paced online course with an online course that added MOOC-based guidance and study tips, improvement in knowledge of spinal cord injury treatment as well as gains in confidence to treat were observed after both courses; however, there was no advantage in the MOOC group. Additionally, Chen and coworkers [30] found no difference in scores on assignments between an online and an onsite version of a digital media course. Colvin et al [35] compared learning gains measured in their MOOC with learning gains in traditional settings; they found evidence of learning in the MOOC, in which scores were slightly higher than typical for a comparable lecture-based course but significantly lower than those seen in other courses with an “interactive engagement” component. In a finding that appears counter to the above, Rubio [38] found that improvement in language comprehensibility was greater in a MOOC than in a face-to-face course. Finally, in their review, Rowe et al [17] looked specifically at the effectiveness of MOOCs in health professions education; they concluded that it cannot be said that MOOCs “enhance student learning” despite the proliferation of MOOCs and the “hype” about their potential. These contradictory findings suggest that when comparing MOOCs to other learning formats, the benefits of MOOCs remain unclear.

MOOCs were also expected to foster and build social networks. In reality, however, the amount of interaction among MOOC participants is often limited, and a small proportion of learners are usually responsible for most of this interaction, a finding reinforced by the studies we examined [42,43,45]. Nevertheless, there are social elements to MOOC participation, as discussed in the Outcomes Beyond Kirkpatrick section above. Joksimović and colleagues [14] proposed a model that may be a useful framework for illuminating some of the outcomes that do not readily fit the Kirkpatrick framework. Their model considers social outcomes (along with academic and affective outcomes) in “immediate,” “course-level,” and “postcourse” settings. Since affective and social outcomes are evident in the studies analyzed here, it is worthwhile to consider them as benefits of MOOC participation, which may warrant additional research in its own right; the model proposed by Joksimović and colleagues [14] may be a useful starting point.

Methodological Considerations

In this study, we used a well-known model to frame our findings and explored one understudied aspect of MOOCs, providing a view of what learners can gain from them. The richness of data from an in-depth secondary analysis of a small number of studies from a systematic review with broad subject matter, combined with frequent debriefing sessions and investigator triangulation, enhanced the credibility of the findings. We argue that qualitatively synthesizing existing data in an attempt to make sense of contextually and methodologically diverse findings is an important contribution to the scholarly literature.

There are also some limitations to this study. Synthesizing both quantitative and qualitative data is a daunting task, as these data derive from very different paradigms. Thus, an important factor limiting the applicability of our findings is the difficulty of extracting results from eclectic and dissimilar studies, including qualitative and quantitative methods and grey literature, and attempting to compare and contrast them. The findings should thus be interpreted with due caution. Further, as our work builds on a previous review, we included only studies that were included therein. This may leave out some relevant studies, despite the rigorous inclusion criteria of the previous review. Finally, despite the frequent scholarly use of the Kirkpatrick framework, there are some inherent limitations to the model that also have implications for this work. It has been argued that the four-level model depicts an oversimplified view of learning and training effectiveness that does not take individual or contextual influences into account in the evaluation of the learning that occurs [46]. Thus, using the Kirkpatrick framework deductively, as in this study, and sorting “contextual” data into predefined themes was challenging. Furthermore, Kirkpatrick’s model assumes that the four levels denote a causal chain in which positive reactions lead to greater learning, which in turn yields greater transfer and, consequently, more positive results. While the model is vague about the causal relationships between level outcomes, it does imply that a simple causal relationship exists between the levels [47].

In addition, in this study, we examined data that were not congruent with the framework but that are nonetheless important to the discussion of MOOC outcomes. When considering the outcomes reported in the studies we reviewed, we chose not to include outcomes we viewed as belonging to Kirkpatrick Level 1, Reaction. This level is usually reserved for outcomes that reflect a participant’s reaction to a particular program or training. Since this may include how participants “feel” about the program in question, Level 1 outcomes can certainly include an affective state in relation to the training. We found some outcomes that we described as “affective,” which included “feelings” such as excitement and inspiration. However, these feelings did not refer to the MOOC (training) itself; rather, the “excitement” and “inspiration” were feelings about the subject of the MOOC as a result of the MOOC, which does not seem to us to fall clearly within Kirkpatrick Level 1. We believe that these feelings might even fall under Kirkpatrick Level 2 in the “Attitude” category; however, we made the conservative decision to separate them. Whether such feelings belong in the Kirkpatrick framework would be an interesting topic for further inquiry.

Conclusions

Our findings point to some gains from MOOCs, and while we can expect MOOCs to persist, how learners benefit from the experience of participating in these courses remains unclear. This is especially true when comparing MOOCs to other learning modes, as evidenced by the comparative studies included in our sample. In our study, we looked for gains or benefits to MOOC learners in all subject areas, and we used the Kirkpatrick framework to explore what learners might gain. From a diverse set of studies, we found outcomes that included changes in knowledge, skills, attitude, and confidence as well as changes in behavior, increased excitement about a subject, and effects on cultural identity as a result of MOOC participation. Thus, beyond outcomes that can be classified as “learning,” such as increased knowledge or skill, it does appear that MOOCs provide some value for participants via the gains described above.

In contrast to systematic reviews of MOOC research, we carried out a deeper qualitative analysis of a set of studies from one systematic review that looked only at MOOC evaluation methods. Thus, as an extension of Alturkistani et al [21], we sought to identify MOOC outcomes that benefit the learner. With a qualitative investigation of a subset of studies on MOOC evaluation methods, we were able to apply the Kirkpatrick framework to identify a number of types of learner outcomes. However, as others have pointed out, the absence of systematic ways of measuring the benefits to learners is evident in our synthesis, and work remains to be done to determine the role of MOOCs and what they offer to participants and to the world.

Acknowledgments

We would like to thank Abrar Alturkistani, MPH, and Edward Meinert, PhD, for collaboration on the EIT Health project that inspired this paper. We also thank Hanna Augustsson, PhD, for her expert review of our application of the Kirkpatrick model. All conclusions are the authors’ own. This work was partially funded by EIT Health (Grant 18654).

Authors' Contributions

ERB, TS, and PJP contributed to the conception, study design, data collection, analysis and interpretation, and drafting and critical revision of the manuscript. All authors approved the final version of the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Description of the analytic procedure for the qualitative synthesis.

DOCX File , 18 KB

  1. Hollands FM, Tirthali D. Institute of Education Sciences. New York: Center for Benefit-Cost Studies of Education, Teachers College, Columbia University; 2014. MOOCs: Expectations and Reality. Full Report   URL: https://files.eric.ed.gov/fulltext/ED547237.pdf [accessed 2020-05-14]
  2. de Freitas SI, Morgan J, Gibson D. Will MOOCs transform learning and teaching in higher education? Engagement and course retention in online learning provision. Br J Educ Technol 2015 Apr 08;46(3):455-471. [CrossRef]
  3. Zhu M, Sari A, Lee MM. A systematic review of research methods and topics of the empirical MOOC literature (2014–2016). Internet High Educ 2018 Apr;37:31-39. [CrossRef]
  4. Veletsianos G, Shepherdson P. A Systematic Analysis and Synthesis of the Empirical MOOC Literature Published in 2013–2015. IRRODL 2016 Mar 01;17(2):198-221. [CrossRef]
  5. Blanco Á, García-Peñalvo F, Sein-Echaluce M. A methodology proposal for developing Adaptive cMOOC. 2013 Nov Presented at: First International Conference on Technological Ecosystems for Enhancing Multiculturality; November 2013; Salamanca, Spain p. 553-558   URL: https://www.researchgate.net/publication/262167521_A_methodology_proposal_for_developing_Adaptive_cMOOC [CrossRef]
  6. Illeris K. A comprehensive understanding of human learning. In: Illeris K, editor. Contemporary Theories of Learning. London: Routledge; 2009:7-20.
  7. Mann K, Dornan T, Teunissen P. Perspectives on learning. In: Dornan T, Mann K, Scherpbier A, Spencer J, editors. Medical Education: Theory and Practice. Edinburgh: Churchill Livingstone Elsevier; 2011:17-38.
  8. Simons RJ, van der Linden J, Duffy T, editors. New Learning. Netherlands: Springer; 2000.
  9. Bayne S, Ross J. The Higher Education Academy. 2014. The pedagogy of the Massive Open Online Course (MOOC): the UK view   URL: https://www.advance-he.ac.uk/knowledge-hub/pedagogy-massive-open-online-course-mooc-uk-view [accessed 2020-05-14]
  10. Liyanagunawardena TR, Adams AA, Williams SA. MOOCs: A systematic study of the published literature 2008-2012. IRRODL 2013 Jul 05;14(3):202. [CrossRef]
  11. Liyanagunawardena TR, Williams SA. Massive open online courses on health and medicine: review. J Med Internet Res 2014 Aug 14;16(8):e191 [FREE Full text] [CrossRef] [Medline]
  12. Zhu M, Sari A, Lee MM. A systematic review of research methods and topics of the empirical MOOC literature (2014–2016). Internet High Educ 2018 Apr;37:31-39 [FREE Full text] [CrossRef]
  13. Deng R, Benckendorff P, Gannaway D. Progress and new directions for teaching and learning in MOOCs. Comput Educ 2019 Feb;129:48-60. [CrossRef]
  14. Joksimović S, Poquet O, Kovanović V, Dowell N, Mills C, Gašević D, et al. How Do We Model Learning at Scale? A Systematic Review of Research on MOOCs. Rev Educ Res 2017 Nov 14;88(1):43-86. [CrossRef]
  15. Bozkurt A, Akgün-Özbek E, Zawacki-Richter O. Trends and Patterns in Massive Open Online Courses: Review and Content Analysis of Research on MOOCs (2008-2015). IRRODL 2017 Aug 15;18(5):1-30. [CrossRef]
  16. Deng R, Benckendorff P. A Contemporary Review of Research Methods Adopted to Understand Students’ and Instructors’ Use of Massive Open Online Courses (MOOCs). Int J Inf Educ 2017;7(8):601-607. [CrossRef]
  17. Rowe M, Osadnik CR, Pritchard S, Maloney S. These may not be the courses you are seeking: a systematic review of open online courses in health professions education. BMC Med Educ 2019 Sep 14;19(1):356 [FREE Full text] [CrossRef] [Medline]
  18. Foley K, Alturkistani A, Carter A, Stenfors T, Blum E, Car J, et al. Massive Open Online Courses (MOOC) Evaluation Methods: Protocol for a Systematic Review. JMIR Res Protoc 2019 Mar 07;8(3):e12087. [CrossRef]
  19. Pilli O, Admiraal W. Students' Learning Outcomes in Massive Open Online Courses (MOOCs): Some Suggestions for Course Design. Yuksekogretim Derg 2017 Apr 29;7(1):46-71. [CrossRef]
  20. Reschly AL, Christenson SL. Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In: Christenson SL, Reschly AL, Wylie C, editors. Handbook of Research on Student Engagement. New York: Springer Science + Business Media; 2012:3-19.
  21. Alturkistani A, Lam C, Foley K, Stenfors T, Blum E, Van Velthoven MH, et al. Massive Open Online Course Evaluation Methods: Systematic Review. J Med Internet Res 2020 Apr 27;22(4):e13851 [FREE Full text] [CrossRef] [Medline]
  22. Literat I. Implications of massive open online courses for higher education: mitigating or reifying educational inequities? High Educ Res Dev 2015 May 05;34(6):1164-1177. [CrossRef]
  23. Hansen JD, Reich J. Democratizing education? Examining access and usage patterns in massive open online courses. Science 2015 Dec 04;350(6265):1245-1248. [CrossRef] [Medline]
  24. Reich J, Ruipérez-Valiente JA. The MOOC pivot. Science 2019 Jan 11;363(6423):130-131. [CrossRef] [Medline]
  25. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: The Four Levels. Oakland: Berret-Koehler; 2006.
  26. Kirkpatrick Partners. 2019. The New World Kirkpatrick Model   URL: https://www.kirkpatrickpartners.com/Our-Philosophy/The-New-World-Kirkpatrick-Model [accessed 2020-05-15]
  27. Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Syst Rev 2016 May 04;5:74 [FREE Full text] [CrossRef] [Medline]
  28. Hossain MS, Shofiqul Islam M, Glinsky JV, Lowe R, Lowe T, Harvey LA. A massive open online course (MOOC) can be used to teach physiotherapy students about spinal cord injuries: a randomised trial. J Physiother 2015 Jan;61(1):21-27 [FREE Full text] [CrossRef] [Medline]
  29. Clarke V, Braun V. Teaching thematic analysis: Overcoming challenges and developing strategies for effective learning. Psychologist 2013 Feb;26(2):120-123 [FREE Full text]
  30. Chen W, Jia J, Miao J, Wu X, Wang A, Yang B. Assessing Students' Learning Experience and Achievements in a Medium-Sized Massively Open Online Course. 2015 Presented at: IEEE 15th International Conference on Advanced Learning Technologies; 2015; Hualien. [CrossRef]
  31. Konstan JA, Walker JD, Brooks DC, Brown K, Ekstrand MD. Teaching Recommender Systems at Large Scale: Evaluation and Lessons Learned from a Hybrid Mooc. ACM Trans. Comput.-Hum. Interact 2015 Apr 15;22(2):1-23. [CrossRef]
  32. Jacquet GA, Umoren RA, Hayward AS, Myers JG, Modi P, Dunlop SJ, et al. The Practitioner’s Guide to Global Health: an interactive, online, open-access curriculum preparing medical learners for global health experiences. Medical Education Online 2018 Aug 07;23(1):1-8. [CrossRef]
  33. Liang D, Jia J, Wu X, Miao J, Wang A. Analysis of learners' behaviors and learning outcomes in a massive open online course. Knowledge Management & E-Learning 2014 Sep;6(3):281-298. [CrossRef]
  34. Cross S. Evaluation of the OLDS MOOC curriculum design course: participant perspectives, expectations and experiences. Milton Keynes: Open University; 2013.   URL: http://oro.open.ac.uk/37836/1/EvaluationReport_OLDSMOOC_v1.0.pdf [accessed 2020-05-15]
  35. Colvin KF, Champaign J, Liu A, Zhou Q, Fredericks C, Pritchard DE. Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class. IRRODL 2014 Aug 15;15(4):1-21. [CrossRef]
  36. MacKay JR, Langford F, Waran N. Massive Open Online Courses as a Tool for Global Animal Welfare Education. J Vet Med Educ 2016 Jan;43(3):287-301. [CrossRef]
  37. Brunton J, Brown M, Costello E, Farrell O, Mahon C. Giving Flexible Learners a Head Start on Higher Education: Designing and Implementing a Pre-induction Socialisation MOOC. 2017 Presented at: European Conference on Massive Open Online Courses; 2017; Madrid. [CrossRef]
  38. Rubio F. Teaching pronunciation and comprehensibility in a language MOOC. In: Martín-Monje E, Bárcena E, editors. Language MOOCs: Providing Learning, Transcending Boundaries. Berlin: De Gruyter; 2015:143-160.
  39. Stephens M, Jones K. MOOCs as LIS Professional Development Platforms: Evaluating and Refining SJSU's First Not-for-Credit MOOC. Journal of Education for Library and Information Science 2014;55(4):345-361.
  40. Liu M, Kang J, Cao M, Lim M, Ko Y, Myers R, et al. Understanding MOOCs as an Emerging Online Learning Tool: Perspectives From the Students. Am J Distance Educ 2014 Aug 26;28(3):147-159. [CrossRef]
  41. Alturkistani A, Car J, Majeed A, Brindley D, Wells G, Meinert E. Determining The Effectiveness Of A Massive Open Online Course In Data Science For Health. 2018 Presented at: International Association for Development of the Information Society (IADIS) International Conference on e-Learning; 2018; Madrid.
  42. Mackness J, Waite M, Roberts G, Lovegrove E. Learning in a small, task–oriented, connectivist MOOC: Pedagogical issues and implications for higher education. IRRODL 2013 Sep 30;14(4):1-20. [CrossRef]
  43. Lei C, Hou X, Kwok T, Chan T, Lee J, Oh E. Advancing MOOC and SPOC development via a learner decision journey analytic framework. 2015 Presented at: IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE); 2015; Zhuhai. [CrossRef]
  44. Milligan C, Littlejohn A. Supporting professional learning in a massive open online course. IRRODL 2014 Oct 03;15(5):197-213. [CrossRef]
  45. Tawfik AA, Reeves TD, Stich AE, Gill A, Hong C, McDade J, et al. The nature and level of learner–learner interaction in a chemistry massive open online course (MOOC). J Comput High Educ 2017 Mar 8;29(3):411-431. [CrossRef]
  46. Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Eval Program Plann 2004 Aug;27(3):341-347. [CrossRef]
  47. Holton EF. The flawed four-level evaluation model. Hum Resour Dev Q 1996;7(1):5-21. [CrossRef]


cMOOC: connectivist massive open online course
LMS: learning management system
MOOC: massive open online course
OOC: open online course
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
xMOOC: extended massive open online course


Edited by G Eysenbach; submitted 05.12.19; peer-reviewed by L Blakemore, A Giordano; comments to author 03.02.20; revised version received 21.02.20; accepted 23.03.20; published 08.07.20

Copyright

©Elizabeth R Blum, Terese Stenfors, Per J Palmgren. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.07.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.