MOOC Quality: The Need for New Measures

Nina Hood and Allison Littlejohn

VOL. 3, No. 3

Abstract

MOOCs are re-operationalising traditional concepts in education. While they draw on elements of existing educational and learning models, they represent a new approach to instruction and learning. The challenges MOOCs present to traditional education models have important implications for approaching and assessing quality. This paper foregrounds some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs.

Conceptualising MOOCs

Massive open online courses (MOOCs) are online courses that facilitate open access to learning at scale. However, the interpretation and employment of the MOOC dimensions are not consistent, resulting in considerable variation in purpose, design, learning opportunities and access among different MOOC providers and individual MOOCs. The combinations of technology, pedagogical frameworks and instructional designs vary considerably between individual MOOCs. Some MOOCs reproduce offline models of teaching and learning, focusing on the organisation and presentation of course material while drawing on the Internet to open up these opportunities to a wider audience (Margaryan, Bianco, & Littlejohn, 2015). Others combine the opportunities presented by digital technologies with new pedagogical approaches and the flexibility of OER to design new learning experiences (Gillani & Eynon, 2014).

The four dimensions of a MOOC – massive, open, online and course – have been interpreted and implemented broadly:

Massive refers to the scale of the course and alludes to the large number of learners who participate in some MOOCs. Designing MOOCs involves considering how to disseminate content effectively and support meaningful interactions between learners (Downes, 2013), as well as how to devise new forms of education that enable high quality teaching and learning opportunities to occur at scale. Successful large-scale online education is expensive to produce and deliver (Ferguson & Sharples, 2014, p. 98). Learning ‘through mass public media’ is also limited in its effectiveness, for several reasons. First, learning usually requires a high degree of agency and self-regulation by the learner (Ferguson & Sharples, 2014, p. 98; Milligan, Littlejohn, & Margaryan, 2013). Second, learners are able to ‘drop in’ or ‘drop out’ of a MOOC, largely because registration remains open for the duration of the course. High dropout rates should therefore be anticipated, and since not all learners intend to complete the course or gain a certificate, ‘drop out’ measures are brought into question (Littlejohn & Milligan, 2015; Jordan, 2015). Third, MOOCs potentially attract diverse types of learners, which leads to complex design requirements, though early MOOCs have tended to attract learners who have already participated in university education (Zhenghao et al., 2015). The large-scale access to learning that MOOCs enable has implications not only for attracting and supporting large numbers of learners, but also for designing learning systems and developing the pedagogy necessary to support all of these different types of learners.

Open has multiple meanings in relation to MOOCs. It may refer to access: anyone, no matter his or her background, prior experience or current context, may enrol in a MOOC. Open can also refer to cost; that is, a MOOC is available free of charge. A third meaning of open relates to the open nature of knowledge acquisition in a MOOC, including the employment of open educational resources (OER) or Open CourseWare (OCW) available under a Creative Commons licence. Open also relates to knowledge production and the opportunity for instructors and learners to remix and reuse the resources developed during a MOOC to create new knowledge (Milligan, Littlejohn, & Margaryan, 2013). However, the philosophy of openness on which MOOCs were founded is being challenged. Platform providers, as well as the organisations that offer MOOCs, are experimenting with different pricing models. These include paying for certification, sitting a proctored exam, receiving course credit, or working towards a degree (see for example http://tinyurl.com/zhmuuo6). The current open access model, which allows anyone to enrol in a MOOC, also is being challenged by the growing recognition that not everyone is adequately prepared, with the necessary autonomy, dispositions and skills, to engage fully in a MOOC. The informal, largely self-directed nature of learning in MOOCs and the lack of support or interpersonal connections during a course mean that, despite being open to anyone, learning opportunities are in reality restricted to those with the necessary knowledge, skills and dispositions to engage independently.

The online dimension of MOOCs increasingly is being blurred, as MOOCs are used in blended learning contexts to supplement in-person school and university classes (Bates, 2014; Bruff, Fisher, McEwen, & Smith, 2013; Caulfield, Collier, & Halawa, 2013; Firmin et al., 2014; Holotescu, Grosseck, Cretu, & Naaji, 2014). In a review of the evidence surrounding the integration of MOOCs into offline learning contexts, Israel (2015) determines that while the blended approach leads to achievement outcomes comparable to those of traditional classroom settings, its use tends to be associated with lower levels of learner satisfaction. Downes (2013) suggests that for an online course to qualify as a MOOC, no required element of the course should have to take place in a specific physical location. However, this requirement does not preclude additional offline interactions taking place. It is important to recognise that no online course is bounded to the online context. Learning is distributed across, and informed by, the multiple contexts of a learner’s life. How and why a learner engages with a MOOC is determined both by their current situation and by their personal ontogeny. The learning context of a MOOC also is situated within and across the institutional contexts of the specific course creator and the platform provider. Recognising and addressing the multiple, and at times competing, contexts in which each MOOC is situated is critical to discussions of quality.

Course conceptualisation varies across different MOOCs. According to Downes (2013), three criteria must be met for a MOOC to be regarded as a ‘course’: (1) it is bounded by a start and end date; (2) it is cohered by a common theme or discourse; and (3) it is a progression of ordered events. While MOOCs typically are bounded, this may be manifest in different ways. MOOCs initially were structured courses, designed to parallel in-person, formal learning, such as university classes, with start and end dates. However, an increasing number of MOOCs are not constrained by specific start or end dates (Shah, 2015), facilitating a self-paced learning model. The length of courses also varies, with some constructed as a series of shorter modules, which may be taken independently or combined to form a longer learning experience. Patterns of learner engagement vary substantially in MOOCs. Conole (2013) suggests that participation can range from completely informal, with learners having the autonomy and flexibility to determine and chart their own learning journey, to engagement in a formal course, which operates in a similar manner to offline formal education. Reich (2013) has questioned whether a MOOC is a textbook (a transmitter of static content) or a course, because of the conflicts that exist around confined timing and structured versus self-directed learning, the tension between skills-based and content-based objectives, and whether certification is included (or indeed achieved by learners). Siemens (2012) argues that the primary tension in MOOC conceptualisation is between the transmission model and the construction model of knowledge and learning. Rather than being viewed as a course, MOOCs could be conceptualised as a means by which learners construct and ultimately define their own learning (ibid).

Thus, the term ‘MOOC’ is being applied to such a wide range of learning opportunities that it provides limited insight into the educational experience being offered. The specific nature and composition of each MOOC are profoundly shaped by, and ultimately the product of, its designers and instructors, the platform and platform provider, and the participants, who each bring their own frames of reference and contextual frameworks. Therefore, any discussion or attempt to quantify or qualify notions of quality in MOOCs requires the exploration of the complexities and diversity in designs, pedagogies, purposes, teacher experiences and roles, and participant motivations, expectations and behaviours present in MOOCs (Ross, Sinclair, Knox, & Macleod, 2014; Mackness, Mak, & Williams, 2010; Milligan, Littlejohn, & Margaryan, 2013).

Quality Indicators: Presage, Process and Product Variables

Quality measures must take into consideration the diversity among MOOCs as well as the various, and often competing, frames of reference of different stakeholders – learners, instructors, organisations and governments. Dimensions of quality in education have been structured and organised using a model developed by Biggs (1993), the ‘3P model’ (Gibbs, 2010). The model conceptualises education as a complex set of interacting ecosystems. To understand how a particular ecosystem (i.e., a MOOC) operates or what impact it has, the course is broken down into its constituent parts to examine how these parts relate to each other and how they combine to form a whole. It further is necessary to understand each MOOC ecosystem in relation to other ecosystems. Biggs (1993) provides a useful model to examine the variables that can be measured to assess the quality of learning (see Figure 1).

Figure 1: Biggs’ 3P Model

Biggs divides each learning ecosystem into three types of variables – presage, process and product variables. Presage variables are the resources and factors that go into the teaching and learning process, including the learners, instructors, institution and, in the case of MOOCs, the platform and platform provider. Process variables refer to the processes and actions associated with the presage variables, including instructional design, pedagogical approaches, and learning resources and materials. Product variables are the outputs or outcomes of the educational processes.
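
To make this decomposition concrete, the sketch below (in Python) models a single MOOC ecosystem as the three groups of variables just described. It is purely illustrative: the field names are our assumptions about what might be catalogued under each heading, not part of Biggs’ (1993) model itself.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Presage:
    """Resources and factors that go into the teaching and learning process."""
    provider: str
    platform: str
    instructors: list[str] = field(default_factory=list)
    learner_backgrounds: list[str] = field(default_factory=list)

@dataclass
class Process:
    """Processes and actions associated with the presage variables."""
    instructional_design: str
    pedagogical_approach: str
    resources: list[str] = field(default_factory=list)

@dataclass
class Product:
    """Outputs or outcomes of the educational processes."""
    completion_rate: Optional[float] = None   # conventional measure
    goal_attainment: Optional[float] = None   # learner-centred measure

@dataclass
class MOOCEcosystem:
    """One MOOC broken down into its 3P constituent parts."""
    presage: Presage
    process: Process
    product: Product
```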

Presage Variables: Provider and Instructor; Learner; Platform

Conventional measures of presage variables include student-to-staff ratios, both across an institution as a whole and within individual courses, the quality of teaching staff (often measured by job role or teaching qualification), the allocation of teaching funding, and the prior qualifications of students entering an institution or the acceptance rate.

MOOCs disrupt these traditional measures. They are non-selective, with open admission, and are frequently designed to have a single instructor teaching thousands of learners. This has resulted in calls for quality measures that recognise the diversity of learners and the openness of a course (Butcher, Hoosen, Uvalić-Trumbić, & Daniel, 2013; iNACOL, 2011; QM, 2013; Rosewell & Jansen, 2014). These measures have important implications for process and product variables.

The MOOC platform plays an important role in determining the access, reach and nature of the course on offer. It further influences the instructional design, the technology that is available, and possible cost structures. Platforms are experimenting with new course structures, such as incorporating greater intentionality into course design by creating MOOCs with more practical outcomes for learners (Shah, 2015). Platform providers, such as Coursera, edX and FutureLearn, are also experimenting with different cost structures, including paid credentialing and course-credit opportunities, and some providers have developed their own credentials.

The MOOC Provider can be anyone. The United States Government, the World Bank, the American Museum of Natural History, the Museum of Modern Art (New York), Google and AT&T are some of the many organisations that have run MOOCs. To date, however, most MOOCs have been created by instructional designers in universities: first by a group of researchers in Canada (Downes, 2008; Downes, 2009), then by prominent institutions worldwide. This has led some commentators to suggest that MOOCs are merely an exercise in brand promotion (Conole, 2013). Others imply that MOOCs promote and reinforce distinctions between the well-known research universities and large corporations that produce MOOCs (and control knowledge), and the less-affluent universities that do not necessarily have the financial resources to produce MOOCs and consequently are their consumers (Rhoads, Berdan, & Toven-Lindsey, 2013). The tensions and power imbalances between MOOC creators, the courses they develop, and the learning they support on the one hand, and learners on the other, are highlighted by the fact that many universities do not offer credit for the MOOCs they themselves offer (Adamopoulos, 2013).

The use of high quality content resources and activities (Amo, 2013; Conole, 2013; Margaryan et al., 2015) and the opportunities for quality knowledge creation throughout the course of the MOOC (Guardia, Maina, & Sangra, 2013) are hallmarks of effective instructional design. Sound technology use is also important to the design and delivery of high quality learning experiences and opportunities (Amo, 2013; Conole, 2013; Guardia, Maina, & Sangra, 2013; Istrate & Kestens, 2015). However, Dillenbourg and colleagues (2013) warn of the tension between ‘edutainment’ and supporting deep learning, and the danger of providing ‘overpolished’ and entertaining materials without first considering the pedagogical approaches within which they are used. Research suggests the need for quality measures that evaluate both content and resource design and learner engagement with content and resources. A number of quality criteria already exist that universities use both for accreditation and to maintain internal standards, and these could be extended, potentially in a modified form, to MOOCs (Dillenbourg et al., 2013). Examples of frameworks that have been expanded to address MOOCs include the Quality Matters (QM) guide, iNACOL, and OpenupEd. These could be used in conjunction with new technology-enabled measures of learner engagement. One such example is the Precise Effectiveness Strategy, which purports to calculate the effectiveness of learners’ interactions with educational resources and activities (Munoz-Merino, Ruiperez-Valiente, Alario-Hoyos, Perez-Sanagustin, & Delgado Kloos, 2015).
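
By way of illustration, the sketch below computes a deliberately simplified effectiveness proxy in the spirit of, but not reproducing, the Precise Effectiveness Strategy. The two definitions used here – the share of attempted exercises answered correctly and the share of assigned video time watched – are assumptions made for this example only.

```python
def exercise_effectiveness(correct: int, attempted: int) -> float:
    """Fraction of attempted exercises answered correctly (0.0 if none attempted)."""
    return correct / attempted if attempted else 0.0

def video_effectiveness(seconds_watched: float, seconds_assigned: float) -> float:
    """Fraction of assigned video time actually watched, capped at 1.0."""
    if seconds_assigned <= 0:
        return 0.0
    return min(seconds_watched / seconds_assigned, 1.0)

# Example: a learner who solved 7 of 10 attempted problems and watched
# 40 of 60 minutes of assigned video.
print(exercise_effectiveness(7, 10))           # 0.7
print(video_effectiveness(40 * 60, 60 * 60))   # 0.666...
```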

Data suggest that the MOOC instructor has a significant impact on learner retention in MOOCs (Adamopoulos, 2013). Further research suggests that instructors’ participation in discussion forum activity and active support of learners during the running of a MOOC positively influence learning outcomes (Coetzee, Lim, Fox, Hartman, & Hearst, 2011; Deslauriers, Schelew, & Wieman, 2011). Ross et al. (2014) argue for the importance of acknowledging the complexity of teacher positions and experiences in MOOCs and how these influence learner engagement.

Although there were around 35 million MOOC Learner registrations in 2015 (Shah, 2015), data suggest that MOOCs currently are not attracting as diverse a body of learners as hoped, with most learners holding a degree-level qualification (Christensen et al., 2013; Ho et al., 2014). However, there is considerable variety in learners’ motivations for enrolling in a MOOC. Common factors include: interest in the topic, access to free learning opportunities, the desire to refresh knowledge, the opportunity to draw on world-class university knowledge, and the wish to gain accreditation (Davies et al., 2014; Winthrup et al., 2015). Christensen et al. (2013) found that nearly half of MOOC students reported their reason for enrolling as “curiosity, just for fun”, while 43.9% cited the opportunity to “gain skills to do my job better.” Motivation determines how a person engages with a learning opportunity both cognitively and behaviourally, and is therefore a mediating factor in relation to other quality measures.

Low MOOC completion rates have been viewed as problematic. However, passive engagement can be considered a valid learning approach and is not always indicative of a lack of learning (Department for Business, Innovation and Skills, 2013). The majority of learners in MOOCs are not adhering to traditional expectations or learning behaviours. Consequently, they do not necessarily measure success as engaging with all of the content, completing the activities, or achieving a certificate of completion (Littlejohn, Hood, Milligan, & Mustain, 2016). Successful learning in MOOCs increasingly is learner driven and learner determined. As a result, traditional quality measures related to outcome variables (such as completion rates or grades) may be of limited relevance to MOOCs (Littlejohn & Milligan, 2015).

Confidence, prior experience and motivation have been found to mediate engagement (Milligan, Littlejohn, & Margaryan, 2013). It further has been suggested that learners’ geographical location affects access to MOOCs as well as interest in topics (Liyanagunawardena, Adams, & Williams, 2013), with demographic information serving as an intermediary characteristic to explain behaviour in a MOOC (Skrypnyk, Hennis, & de Vries, 2014). Further research has identified a relationship between learners’ behaviour and engagement and their current contexts, including occupation (Hood, Littlejohn, & Milligan, 2015; Wang & Baker, 2015; de Waard et al., 2011), as well as a relationship between learners’ learning objectives and their learning outcomes (Kop, Fournier, & Mak, 2011). Learners’ prior education experience also has been found to influence their retention in a MOOC (Emanuel, 2013; Koller et al., 2013; Rayyan, Seaton, Belcher, Pritchard, & Chuang, 2013) and their readiness to learn (Bond, 2015; Davis, Dickens, Leon, del Mar Sanchez Ver, & White, 2014; Kop et al., 2011), with more experienced learners typically finding it easier to navigate the unstructured nature of learning in a MOOC (Lin, Lin, & Hung, 2015).

When discussing and assessing quality in MOOCs it is necessary to situate the MOOC, the learning opportunities it provides and individual learners within the multiple ecosystems in which they interact. One of the disruptive forces of MOOCs is that they shift thinking about quality from the perspective of the instructor, institution and platform provider to that of the learner. Therefore, establishing reliable measures of confidence, experience and motivation, which extend beyond self-report, could provide a more accurate view of quality than conventional learner metrics.

Process Variables – Pedagogy and Instructional Design

The flexibility of participation and the self-directed nature of engagement, which enable learners to self-select the learning opportunities and pathways they follow when participating in a MOOC (de Boer et al., 2014), necessitate the re-operationalisation of many process variables. Questions emerge regarding the balance between structure (intended to provide direction) and self-regulation, between broadcast and dialogue models of delivery, whether MOOCs should offer edutainment or deep learning opportunities, and whether and how to promote homophily or diversity in learners’ engagement and participation.

MOOC Instructional Design and the use of different tools and resources influence engagement and support learning in MOOCs (Margaryan et al., 2015). Outcome measures of retention and completion are often used as proxies for learning when assessing process variables. However, these are not necessarily accurate measures of learning in MOOCs, where participation is often self-directed, with learners following individual, asynchronous pathways for which there is no correct or prescribed route (de Boer et al., 2014). The diversity of learners’ goals and motivations for taking a MOOC must be addressed within its instructional design, allowing for learner autonomy (Mackness, Waite, Roberts, & Lovegrove, 2013) and flexible learning patterns. However, this flexibility must be situated within an overarching, coherent design, which incorporates adequate support structures. Daradoumis and colleagues (2013) and Margaryan et al. (2015) found that while MOOCs allow for individual learning journeys, there is a problematic lack of designed customisation and personalisation that responds to learner characteristics. Designing a MOOC based on participatory design and activity-based learning facilitates learning that is relevant to learners (Hew, 2014; Istrate & Kestens, 2015; Warburton & Mor, 2015).

There are strong links between the diversity of learners (a presage variable) MOOCs can attract and the need to incorporate differentiated pathways and learner-centred designs. Learner-centred design takes into consideration the diversity of the learner population and the need to provide learning activities that cater to and support different learning styles and needs (Alario-Hoyos, Perez-Sanagustin, Cormier, & Delgado-Kloos, 2014; Guardia, Maina, & Sangra, 2013; Hew, 2014; Margaryan et al., 2015). The design should offer opportunities for personalised learning (Istrate & Kestens, 2015) as well as draw on learners’ individual contexts and previous experience (Scagnoli, 2012).

It also is important to support and scaffold learning by making sure support structures are integrated into the MOOC design (Skrypnyk, de Vries, & Hennis, 2015). Learning supports can be developed through the incorporation of accessible materials and instructors who actively contribute to and support learners (Hew, 2014), as well as through opportunities for peer assistance (Amo, 2013; Guardia et al., 2013). However, the ratio of instructors to learners in MOOCs raises concern (Dolan, 2014; Kop, Fournier, & Mak, 2011). Learning and data analytics increasingly are being used to guide the learner and instructor, with tutors receiving predictive analytics about each of their students and using these data to target their support (Rienties et al., 2016), or learners being ‘nudged’ to focus attention (Martinez, 2014).

Interaction and collaboration encompass both instructor-learner interactions and learner-to-learner collaborations. A relationship has been identified between learners’ participation in discussion forums and completion (Gillani & Eynon, 2014; Kizilcec et al., 2013; Sinha et al., 2014), though the reasons for this are uncertain. Analysis of discussion forum posts indicates wide variation in content and topics (Gillani et al., 2014). However, a correlation has been detected between the intensity of activity and course milestones (ibid). Higher performing students engage more frequently in MOOC discussion forums; however, their interactions are not restricted to other high-performing students. Discussion forums also provide an important source of information for instructors about their students and how they are engaging with the content (Rosé, Goldman, Zoltners Sherer, & Resnick, 2015).

Opportunities for the strategic use of feedback (from both peers and instructors) are important elements of effective instructional design (Alario-Hoyos et al., 2014; Amo, 2013; Conole, 2013; Margaryan et al., 2015). Receiving targeted, relevant and informative feedback in a timely manner is important for supporting students’ learning (Hattie, 2009). However, in their analysis of 76 MOOCs, Margaryan and colleagues (2015) found few opportunities for high quality instructor feedback. There is evidence of the predictive power of data and learning analytics to offer insight into learning (Tempelaar, Rienties, & Giesbers, 2015). New techniques are being developed, including technology for analysing discussions for learning (Howley, Mayfield, & Rosé, 2013), the formation of discussion groups (Yang, Wen, Kumar, Xing, & Rosé, 2014), and indicators of motivation, cognitive engagement and attitudes towards the course (Wen, Yang, & Rosé, 2014a, 2014b). Developing measures capable of capturing interactions quantitatively as well as qualitatively will facilitate a richer understanding of how interaction and collaboration support student learning and engagement, as well as how they contribute to the fulfilment of individual learners’ goals. Research has investigated how formative and summative feedback can be generated (Whitelock, Gilbert, & Wills, 2013), how MOOCs could operate as foundational learning experiences before traditional degree courses (Wartell, 2012), and how and whether university credit might be offered by more MOOCs (Bellum, 2013; Bruff, Fisher, McEwen, & Smith, 2013).
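
As a toy illustration of this family of techniques, the sketch below derives a crude lexicon-based sentiment score for forum posts. The word lists are invented placeholders, and the cited studies use substantially richer linguistic models; the point is simply that forum text can be turned into a quantitative indicator.

```python
import re

# Invented placeholder lexicons; real sentiment models are far richer.
POSITIVE = {"great", "helpful", "clear", "enjoyed", "thanks"}
NEGATIVE = {"confusing", "boring", "frustrated", "unclear", "difficult"}

def post_sentiment(post: str) -> int:
    """Positive minus negative lexicon hits; above zero suggests positive tone."""
    words = re.findall(r"[a-z]+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(post_sentiment("Thanks, the videos were clear and helpful"))  # 3
print(post_sentiment("I am frustrated; week two was confusing"))    # -2
```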

Learning analytics could be used to better personalise and tailor MOOCs to learners (Daradoumis, Bassi, Xhafa, & Caballe, 2013; Kanwar, 2013; Lackner, Ebner & Khalil, 2015; Sinha et al., 2014; Tabba & Medouri, 2013). Developing quality indicators that can be used in conjunction with learning analytics could provide powerful measures of pedagogically effective technology use in MOOCs.

Product Variables – Learners and Learning

In conventional education the most commonly used indicators of learning quality are progression and completion rates and employment statistics (Gibbs, 2010). However, the use of these indicators as MOOC quality measures is highly problematic, since completion is not always the goal of individual learners (Littlejohn et al., 2016) and is therefore not, on its own, an appropriate measure of the quality of learning.

Particular learner behaviours – engagement in discussion forums (Gillani et al., 2014), completion of weekly quizzes (Admiraal et al., 2015), and routine engagement over the course of a MOOC (Loya, Gopal, Shukla, Jermann, & Tormey, 2015; Sinha et al., 2014) – correlate positively with completion levels. As such, they can be interpreted as facilitators of the learning process. However, completion is not synonymous with satisfaction, the achievement of goals, or learners’ perceptions of successful learning (Koller, Ng, Do, & Chen, 2013; Littlejohn et al., 2016; Wang & Baker, 2015). Further evidence indicates that learners who ‘lurk’, engage passively, or do not complete the full course can rate their overall experience of a MOOC as highly as those learners who complete it (Kizilcec et al., 2013; Milligan et al., 2013).
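
A minimal sketch of how one such behaviour-completion association might be tested is shown below, using a point-biserial correlation (appropriate for a binary outcome against a count variable). The ten-learner dataset is a fabricated placeholder, not data from the cited studies.

```python
from scipy.stats import pointbiserialr

forum_posts = [0, 2, 5, 1, 9, 0, 4, 7, 3, 6]  # posts per learner
completed   = [0, 0, 1, 0, 1, 0, 1, 1, 0, 1]  # 1 = completed the course

r, p = pointbiserialr(completed, forum_posts)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```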

There is a need to measure other product variables that reflect the diverse and contextualised patterns of participation and the range of outcomes in MOOCs. These should include the individual motivations and goals of learners, both as they are conceptualised at the start of a course and as they develop over time. This will enable the development of differentiated product variables as well as the tracking of individual learners’ engagement with MOOC resources, assessment – both formative and summative – and feedback, interaction with others, and patterns of communication. This profile of individual learners should also include background information, including demographic data, prior learning experiences, and behavioural data.

Learning analytics techniques that analyse combinations of demographic details, academic and social integration, and social and behavioural factors, together with in-course behaviour, can predict different types of performance (Agudo-Peregrina et al., 2014; Credé & Niehorster, 2012; Marks, Sibley, & Arbaugh, 2005; Macfadyen & Dawson, 2010; Tempelaar et al., 2015). Other useful product variables include post-MOOC outcomes, such as career progression (Zhenghao et al., 2015), network outcomes and future study.
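
The sketch below illustrates the general shape of such predictive modelling: combining a demographic feature with in-course behavioural features to predict completion. The feature names, the synthetic data, and the choice of logistic regression are our assumptions for illustration, not the models used in the studies cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200

# Synthetic features: one demographic, two behavioural.
X = np.column_stack([
    rng.integers(0, 2, n),     # has_prior_degree (0/1)
    rng.poisson(3.0, n),       # forum_posts
    rng.uniform(0.0, 1.0, n),  # fraction_of_videos_watched
])

# Placeholder outcome, loosely driven by the behavioural features.
y = (X[:, 1] + 3 * X[:, 2] + rng.normal(0, 1, n) > 4.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```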

Indicators of MOOC Quality

Dimensions of MOOC quality depend largely on two variables: the MOOC’s purpose and the perspective of the particular actor. The diversity of learners in a MOOC, the range of purposes for which MOOCs are designed, and the various motivations individual learners have for engaging with a MOOC mean that it is not possible to identify a universal approach for measuring quality. Furthermore, the difficulties in operationalising many of the dimensions of quality – either quantitatively or qualitatively – make assessments of quality challenging.

Daniel (2012) suggests that MOOCs could be evaluated by learners and educators, with the aim of producing league tables that rank courses (there are already several examples of this happening). He suggests that poorly performing courses would either disappear due to lack of demand, or would undertake efforts to improve quality. Uvalić-Trumbić (2013) suggests assessing MOOCs against the question ‘What is it offering to the student?’ However, given the diversity among MOOC participants, the answer to this question would differ for each student.

Another route forward is to equate quality with participation measures (Dillenbourg et al., 2013). The primary focus would be on assessments of the learning outcomes of individual participants, thereby placing the learner at the centre of measures of quality. This is in keeping with the growing focus in the research on developing multiple measures of learner behaviour, motivation and engagement, through the employment of various learning and data analytic techniques. Dillenbourg and colleagues (2013) suggest that multiple assessments of individual learners’ participation could also inform the evaluation of cohorts of learners, of instructional design decisions and the learning outcomes that result from them, and of instructors, whose quality is dependent on the outcomes of the course.

This focus on the learner and the relationship between product and process variables seems to be central to the quality of MOOCs. If the learning outcomes – as measured through a range of variables and indicators – are perceived to represent a high quality learning experience, then by implication the process variables – the various dimensions of pedagogy and instructional design – are appropriate in this context. However, if the learners’ experiences and resulting learning outcomes are not positive, then the process variables may be deemed less suitable to that particular context, even if they conform to a pre-developed list of guidelines. The aim, therefore, is to ensure that the gap between initial expectations and final perceptions of the delivered learning experience is as small as possible: that the process variables lead to the desired product variables and outcome measures.

Rather than coming to a single conclusion about quality in MOOCs, this paper has attempted to explore some of the tensions and challenges associated with quality and to identify a range of variables that can be used to measure quality in MOOCs. It is clear that conventional measures and indicators of quality are not always appropriate for MOOCs. Similarly, given the diversity among MOOC offerings, it is unlikely that there is one clear route forward for assessing quality. Biggs’ 3P model provides a framework for identifying variables and measures associated with quality and for exploring the relationships between them. The aim here was to explain the possible uses of each variable and, where possible, to identify potential measures and instruments that can be used to assess them.

Quality is not objective. It is a measure for a specific purpose. In education, purpose is not a neutral or constant construct. The meaning and purpose ascribed to education shift depending on the context and the actor, with governments, institutions, instructors, and learners approaching education from different viewpoints and consequently viewing quality through different lenses. Since MOOCs shift agency towards the learner, there is a need to foreground learner perspectives, using measures of learner perceptions, behaviours, actions, and experiences as the foundation for assessing quality.

Acknowledgements

The authors extend their thanks to the Commonwealth of Learning and Dr Sanjaya Mishra for supporting the project on “Developing quality guidelines and a framework for quality assurance and accreditation of Massive Open Online Courses”.  Full reports of the project are published as two separate publications by COL, which can be accessed at http://oasis.col.org

References

  1. Adamopoulos, A. (2013). What makes a great MOOC? An interdisciplinary analysis of student retention in online courses. Thirty Fourth International Conference on Information Systems, Milan 2013.
  2. Admiraal, W., Huisman, B., & Pilli, O. (2015). Assessment in Massive Open Online Courses. Journal of e-Learning, 13(4), 207-216.
  3. Agudo-Peregrina, Á., Iglesias-Pradas, S., Conde-González, M., & Hernández-García, Á. (2014). Can We Predict Success from Log Data in VLEs? Classification of Interactions for Learning Analytics and their Relation with Performance in VLE-Supported F2F and Online Learning. Computers in Human Behavior, 31, 542–550.
  4. Alario-Hoyos, C., Perez-Sanagustin, M., Cormier, D., & Delgado-Kloos, C. (2014). Proposal for a Conceptual Framework for Educators to Describe and Design MOOCs. Journal of Universal Computer Science, 20(1), 6-23.
  5. Amo, D. (2013). MOOCs: Experimental Approaches For Quality in Pedagogical and Design Fundamentals. TEEM '13, November 14 - 15 2013, Salamanca, Spain. 
  6. Arbaugh, J. B. (2014). System, Scholar, or Students? Which Most Influences Online MBA Course Effectiveness? Journal of Computer Assisted Learning, 30(4).
  7. Bellum, J. (2013). The Adult Learner and MOOCs. EDUCAUSE Review. Available from: http://www.educause.edu/ero/article/adult-learner-and-moocs
  8. Biggs, J. (1993). From theory to practice: A cognitive systems approach. Higher Education Research & Development, 12(1), 73-85.
  9. Bond, P. (2015) Information Literacy in MOOCs. Current Issues in Emerging Elearning, 2(1), article 6.
  10. Bruff, D. O., Fisher, D. H., McEwen, K. E., & Smith, B. E. (2013). Wrapping a MOOC: Student Perceptions of an Experiment in Blended Learning. Journal of Online Learning and Teaching, 9(2), 187-199.
  11. Butcher, N., Hoosen, S., Uvalić-Trumbić, S., & Daniel, J. (2013). Guide to quality in post-traditional online higher education. Available from: http://www.eadtu.eu/home/policy-areas/quality-assurance/publications/227-guide-to-quality-in-post-traditional-online-higher-education.
  12. Caulfield, M., Collier, A., & Halawa, S. (2013, October 7). Rethinking Online Community in Moocs Used for Blended Learning. EDUCAUSE Review. Retrieved from http://www.educause.edu/ero/article/rethinking-online-community-moocs-used- blended-learning
  13. Chandrasekaran, M., Ragupathi, K., Kan, M., & Tan, B. (2015). Towards Feasible Instructor Intervention in MOOC Discussion Forums. Thirty Sixth International Conference on Information Systems, Fort Worth 2015.
  14. Christensen, G., Steinmetz, A., Alcorn, B., Bennett, A., Woods, D., & Emanuel, E. J. (2013). The MOOC phenomenon: Who takes Massive Open Online Courses and why? Available from: http://ssrn.com/abstract=2350964
  15. Conole, G. (2008). New schemas for mapping pedagogies and technologies. Available from: http://www.ariadne.ac.uk/issue56/conole
  16. Conole, G. (2013). MOOCs as Disruptive Technologies: Strategies for Enhancing the Learner Experience and Quality of MOOCs. RED - Revista de Educación a Distancia, 39. Retrieved from http://www.um.es/ead/red/39/conole.pdf
  17. Daniel, J. (2012). Making Sense of MOOCs: Musings in a Maze of Myth, Paradox and Possibility. Journal of Interactive Media in Education, 2012(3). Doi: http://doi.org/10.5334/2012-18.
  18. Daradoumis, T., Bassi, R., Xhafa, F., & Caballe, S. (2013). A Review on Massive E-Learning (MOOC) Design, Delivery and Assessment. In Proceedings - 2013 8th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 3PGCIC 2013 (pp. 208-213). Doi: http://doi.org/tpk
  19. Davis, H., Dickens, K., Leon, M., del Mar Sanchez Ver, M., & White, S. (2014). MOOCs for Universities and Learners: An Analysis of Motivating Factors. In 6th International Conference on Computer Supported Education, 01- 03 Apr 2014.
  20. de Boer, J., Ho, A., Stump, G., & Breslow, L. (2014). Changing “Course”: Reconceptualizing Educational Variables for Massive Open Online Courses. Educational Researcher, 1-11. DOI: 10.3102/0013189X14523038.
  21. Department for Business, Innovation & Skills. (2013). The maturing of the MOOC. London: Department for Business, Innovation and Skills.
  22. Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved Learning in a Large-Enrolment Physics Class. Science, 332(6031), 862-864.
  23. de Waard, I., Abajian, S., Gallagher, M, Hogue, R., Keskin, N., Koutropoulos, A., & Rodriguez, O. (2011). Using mLearning and MOOCs to understand chaos, emergence, and complexity in education. International Review of Research in Open and Distance Learning, 12(7), pp. 94-115.
  24. Dillenbourg, P., Fox, A., Kirchner, C., Mitchell, J., & Wirsing, M. (2013). Massive Open Online Courses: Current state and perspectives. Manifesto from Dagstuhl Perspectives Workshop. Doi: 10.4230/DagMan.4.1.1.
  25. Dolan, V. (2014). Massive Online Obsessive Compulsion: What are They Saying Out There about the Latest Phenomenon in Higher Education? International Review of Research in Open and Distributed Learning, 15(2).
  26. Downes, S. (2008). Places to go: Connectivism & Connective Knowledge. Innovate: Journal of Online Education, 5(1). Retrieved from http://www.innovateonline.info/pdf/vol5_issue1/Places_to_Go-__Connectivism_&_Connective_Knowledge.pdf
  27. Downes, S. (2009, February 24). Connectivist dynamics in communities [Web log post]. Retrieved from http://halfanhour.blogspot.co.uk/2009/02/connectivist-dynamics-in-communities.html
  28. Downes, S. (2013). The quality of Massive Open Online Courses. Available from: http://mooc.efquel.org/files/2013/05/week2-The-quality-of-massive-open-online-courses-StephenDownes.pdf.
  29. Emanuel, E. (2013). Online Education: MOOCs Taken by Educated Few. Nature, 503. doi:10.1038/503342a.
  30. Firmin, R., Schiorring, E., Whitmer, J., Willett, T., Collins, E. D., & Sujitparapitaya, S. (2014). Case study: Using MOOCs for Conventional College Coursework. Distance Education, 35(2), 178-201. doi: 10.1080/01587919.2014.917707
  31. Gibbs, G. (2010). Dimensions of quality. York: The Higher Education Academy.
  32. Gillani, N., & Eynon, R. (2014). Communication Patterns in Massively Open Online Courses. Internet and Higher Education, 23, 18-26.
  33. Gillani, N., Yasseri, T., Eynon, R., & Hjorth, I. (2014). Structural Limitations of Learning in a Crowd: Communication Vulnerability and Information Diffusion in MOOCs. Scientific Reports, 4, 6447.
  34. Guardia, L., Maina, M., & Sangra, A. (2013). MOOC Design Principles. A Pedagogical Approach from the Learner’s Perspective. eLearning Papers, 33, 1-5.
  35. Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.
  36. Hew, K. (2014). Promoting Engagement in Online Courses: What Strategies Can we Learn from Three Highly Rated MOOCS? British Journal of Educational Technology, 47(2), 320-342. doi:10.1111/bjet.12235.
  37. Ho, A. D., Reich, J., Nesterko, S. O., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses (HarvardX and MITx Working Paper No. 1). doi:10.2139/ssrn.2381263
  38. Holotescu, C., Grosseck, G., Cretu, V., & Naaji, A. (2014). Integrating MOOCs in Blended Courses. Proceedings of the International Scientific Conference of eLearning and Software for Education, Bucharest, 243-250. doi: 10.12753/2066-026X-14-034
  39. Hood, N., Littlejohn, A., & Milligan, C. (2015). Context Counts: How Learners' Contexts Influence Learning in a MOOC. Computers & Education, 91, 83-91.
  40. Howley, I., Mayfield, E., & Rosé, C. P. (2013). Linguistic Analysis Methods for Studying Small Groups. In Cindy Hmelo-Silver, Angela O’Donnell, Carol Chan, & Clark Chin (Eds.), International handbook of collaborative learning. Taylor and Francis, Inc.
  41. Israel, M. (2015). Effectiveness of Integrating MOOCs in Traditional Classrooms for Undergraduate Students. International Review of Research in Open and Distributed Learning, 16(5), 102-118.
  42. Istrate, O., & Kestens, A. (2015). Developing and monitoring a MOOC: The IFRC Experience. The 11th International Scientific Conference eLearning and Software for Education, Bucharest, April 23-24, 2015. Doi: 10.12753/2066-026X-15-179.
  43. Jordan, K. (2015). Massive Open Online Course Completion Rates Revisited: Assessment, Length and Attrition. International Review of Research in Open and Distributed Learning, 16(3), 341-358.
  44. Kanwar, A. (2013, October 16). Quality vs Quantity: Can Technology Help? Opening Keynote, 25th ICDE World Conference, Tianjin, China.
  45. Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and Intention in Massive Open Online Courses: In depth. EDUCAUSE Review Online. Available from:  http://er.educause.edu/articles/2013/6/retention-and-intention-in-massive-open-online-courses-in-depth.
  46. Kop, R., Fournier, H., & Mak, J. (2011). A Pedagogy of Abundance or a Pedagogy to Support Human Beings? Participant Support on Massive Open Online Courses. International Review of Research in Open and Distributed Learning, 12, 74-93.
  47. Lin, Y-L., Lin, H-W., & Hung, T-T. (2015). Value Hierarchy for Massive Open Online Courses. Computers in Human Behavior, 53, 408-418.
  48. Liyanagunawardena, T., Adams, A., & Williams, S. (2013). MOOCs: A Systematic Study of the Published Literature 2008-2012. International Review of Research in Open and Distributed Learning, 14(3), 202-227.
  49. Littlejohn, A., Hood, N., Milligan, C., & Mustain, P. (2016). Learning in MOOCs: Motivations and Self-Regulated Learning in MOOCs. Internet and Higher Education, 29, 40-48.
  50. Littlejohn, A., & Milligan, C. (2015). Designing MOOCs for Professional Learners: Tools and Patterns to Encourage Self-regulated Learning. eLearning Papers, Special Issue on Design Patterns for Open Online Teaching and Learning, 42. Available from: http://www.openeducationeuropa.eu/en/node/170924
  51. Mackness, J., Mak, S. F. J., & Williams, R. (2010). The Ideals and Reality of Participating in a MOOC. In L. Dirckinck-Holmfeld, V. Hodgson, C. Jones, M. de Laat, D. McConnell, & T. Ryberg. (Eds.), Proceedings of the Seventh International Conference on Networked Learning (pp. 266-275). Lancaster, UK: University of Lancaster.
  52. Mackness, J., Waite, M., Roberts, G., & Lovegrove, E. (2013). Learning in a Small, Task–oriented, Connectivist MOOC: Pedagogical Issues and Implications for Higher Education. International Review of Research in Open and Distributed Learning, 14(4), 140-159.
  53. Margaryan, A., Bianco, M., & Littlejohn, A. (2015). Instructional Quality of Massive Open Online Courses (MOOCs). Computers & Education, 80, 77-83.
  54. Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A Structural Equation Model of Predictors for Effective Online Learning. Journal of Management Education, 29(4), 531-563.
  55. Martinez, I. (2014). The Effects of Nudges on Students’ Effort and Performance: Lessons from a MOOC. Working Paper, EdPolicyWorks. Retrieved from: http://curry.virginia.edu/uploads/resourceLibrary/19_Martinez_Lessons_from_a_MOOC.pdf.
  56. Milligan, C., Littlejohn, A., & Margaryan, A. (2013). Patterns of Engagement in Connectivist MOOCs. MERLOT, 9(2), 149-159.
  57. Munoz-Merino, P., Ruiperez-Valiente, J., Alario-Hoyos, C., Perez-Sanagustin, M., & Delgado Kloos, C. (2015). Precise Effectiveness Strategy for Analyzing the Effectiveness of Students with Educational Resources and Activities in MOOCs. Computers in Human Behavior, 47, 108-118.
  58. Quality Matters. (2013). Quality Matters continuing and professional education rubric standards. Available from: https://www.qmprogram.org
  59. Rayyan, S., Seaton, D., Belcher, J., Pritchard, D., & Chuang, I. (2013). Participation and performance in 8.02x Electricity and Magnetism: The first physics MOOC from MITx. arXiv preprint arXiv:1310.3173. Available from: http://arxiv.org/abs/1310.3173.
  60. Reich, J. (2013). MOOC Completion and Retention in the Context of Student Intent. EDUCAUSE Review. Available from: http://er.educause.edu/articles/2014/12/mooc-completion-and-retention-in-the-context-of-student-intent.
  61. Rhoads, R. A., Berdan, J., & Toven-Lindsey, B. (2013). The Open Courseware Movement in Higher Education: Unmasking Power and Raising Questions about the Movement’s Democratic Potential. Educational Theory, 63(1), 87-110.
  62. Rienties, B., Boroowa, A., Cross, S., Kubiak, C., Mayles, K., & Murphy, S. (2016). Analytics4Action Evaluation Framework: A Review of Evidence-Based Learning Analytics Interventions at the Open University UK. Journal of Interactive Media in Education. Doi: http://dx.doi.org/10.5334/jime.394
  63. Rosé, C., Goldman, P., Zoltners Sherer, J., & Resnick, L. (2015). Supportive technologies for group discussion in MOOCs. Current Issues in Emerging eLearning, 2(1), Article 5.
  64. Rosewell, J., & Jansen, D. (2014). The OpenupEd Quality Label: Benchmarks for MOOCs. INNOQUAL: The International Journal for Innovation and Quality in Learning, 2(3), 88-100.
  65. Ross, J., Sinclair, C., Knox, J., & Macleod, H. (2014). Teacher Experiences and Academic Identity: The Missing Components of MOOC Pedagogy. Journal of Online Learning and Teaching, 10(1), 57.
  66. Shah, D. (2015, December 28). MOOCs in 2015: Breaking Down the Numbers. EdSurge. Available from: https://www.edsurge.com/news/2015-12-28-moocs-in-2015-breaking-down-the-numbers.
  67. Siemens, G. (2012). MOOCs are really a platform. ELearnSpace. Available at: http://www.elearnspace.org/blog/2012/07/25/moocs-are-really-a-platform/.
  68. Sinha, T., Li, N., Jermann, P., & Dillenbourg, P. (2014). Capturing “attrition intensifying” Structural Traits from Didactic Interaction Sequences of MOOC Learners.  Proceedings of the 2014 Empirical Methods in Natural Language Processing Workshop on Modeling Large Scale Social Interaction in Massively Open Online Courses.
  69. Skrypnyk, O., de Vries, P., & Hennis, T. (2015). Reconsidering Retention in MOOCs: The Relevance of Formal Assessment and Pedagogy. EMOOCS Conference 2015, Third European MOOCs Stakeholders Summit, Mons, Belgium.
  70. Tabba Y., & Medouri, A. (2013). LASyM: A Learning Analytics System for MOOCs.  International Journal of Advanced Computer Science and Applications, 4(5), 113-119.
  71. Tempelaar, D. T., Rienties, B., & Giesbers, B. (2015). In Search for the Most Informative Data for Feedback Generation: Learning Analytics in a Data-Rich Context. Computers in Human Behavior, 47, 157-167.
  72. Uvalić-Trumbić, S. (2013). MOOCs – Mistaking brand for quality? University World News. Retrieved from: http://www.universityworldnews.com/article.php?story=20130206180425691
  73. Wang, Y., & Baker, R. (2015). Content or platform: Why do students complete MOOCs? MERLOT, 11(1), 17-30.
  74. Warburton, S., & Mor, Y. (2015). Configuring Narratives, Patterns and Scenarios in the Design of Technology Enhanced Learning. In M. Maina et al. (Eds.), The art & science of learning design, 93-104.
  75. Wartell, M. (2012). A New Paradigm for Remediation: MOOCs in Secondary Schools. EDUCAUSE Review. Retrieved from: http://er.educause.edu/articles/2012/11/a-new-paradigm-for-remediation-moocs-in-secondary-schools.
  76. Wen, M., Yang, D., & Rosé, C. P. (2014a). Sentiment Analysis in MOOC Discussion Forums: What does it tell us? Proceedings of Educational Data Mining. Available from: http://www.cs.cmu.edu/~mwen/papers/edm2014-camera-ready.pdf
  77. Wen, M., Yang, D., & Rosé, C. P. (2014b). Linguistic Reflections of Student Engagement in Massive Open Online Courses. In Proceedings of the International Conference on Weblogs and Social Media. Available from: http://www.cs.cmu.edu/~mwen/papers/icwsm2014-camera-ready.pdf.
  78. Whitelock, D., Gilbert, L., & Wills, G. (2013). Feedback generators: providing feedback in MOOCs. In CAA 2013 International Conference. University of Southampton. Retrieved from: http://caaconference.com
  79. Yang, D., Wen, M., Kumar, A., Xing, E., & Rosé, C. (2014). Towards an integration of text and graph clustering methods as a lens for studying social interaction in MOOCs. International Review of Research in Open and Distributed Learning, 15(5).
  80. Zhenghao, C., Alcorn, B., Christensen, G., Eriksson, N., Koller, D., & Emanuel, E. (2015, September). Who’s benefiting from MOOCs, and Why? Harvard Business Review.

Authors

Dr. Nina Hood is a research fellow at the Faculty of Education at the University of Auckland. Her research is focused on the role that digital technologies can play in supporting and enhancing education, and in particular facilitating professional learning opportunities and knowledge mobilisation. Email: n.hood@auckland.ac.nz

Dr. Allison Littlejohn is Professor of Learning Technology and Academic Director for Digital Innovation at The Open University, UK. She has held Professorships at three UK Universities and academic or related positions in the UK and US. Professor Littlejohn’s research has been published in over 200 academic articles, including four books. She has been Principal Investigator or Senior Scientist on around 50 research projects funded by the European Commission (EC), the UK Economic and Social Research Council (ESRC), the Bill & Melinda Gates Foundation, the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the UK Joint Information Systems Committees (JISC), the UK Higher Education Academy (HEA), the Energy Institute (EI), Shell International & British Petroleum (BP).  Her industry-academic research is with multinational companies, most notably Royal Dutch Shell, for whom she was Senior Researcher 2008-2010. Email: allison.littlejohn@open.ac.uk