Archives

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

In recent years, Massive Open Online Courses (MOOCs) have become a phenomenon, offering the possibility to teach thousands of participants simultaneously. At the same time, the platforms used to deliver these courses are still in their fledgling stages. While the content and didactics of these massive courses are the primary factors in their success, a smart platform can still improve or diminish the learner's experience and learning outcomes. This paper proposes an A/B testing framework that can be used within a microservice architecture to validate hypotheses about how learners use the platform and to enable data-driven decisions about new features and settings. To evaluate this framework, three new features (Onboarding Tour, Reminder Mails and a Pinboard Digest) were identified based on a user survey. They were implemented and introduced on two large MOOC platforms, and their influence on learners' behavior was measured. Finally, the paper proposes a data-driven decision workflow for the introduction of new features and settings on e-learning platforms.

Citation: Jan Renz, Daniel Hoffmann, Thomas Staubitz and Christoph Meinel (2016). "Using A/B Testing in MOOC Environments". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
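The kind of A/B framework the paper describes can be illustrated with a minimal bucketing-and-metrics sketch (all function, experiment, and variant names here are hypothetical, not taken from the paper):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically bucket a learner into a variant by hashing.

    The same learner always lands in the same bucket, so exposure stays
    stable across sessions and services -- useful when experiment logic
    is spread across a microservice architecture.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rates(events):
    """events: iterable of (variant, converted) pairs -> rate per variant."""
    totals, hits = {}, {}
    for variant, converted in events:
        totals[variant] = totals.get(variant, 0) + 1
        hits[variant] = hits.get(variant, 0) + int(converted)
    return {v: hits[v] / totals[v] for v in totals}
```

A feature such as the Onboarding Tour would be shown only to learners in the treatment bucket, and a behavioral metric (e.g. completing the first course week) compared across buckets before deciding whether to roll the feature out.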

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

We study how the use of online learning systems stimulates cognitive activities, by conducting an experiment with the use of eye tracking technology to monitor eye fixations of 60 final year students engaging in online interactive tutorials at the start of their Final Year Project module. Our findings show that the students' learning behaviours fall into three different types of eye fixation patterns, and the data corresponding to the different types of learners relates to the performance of the students in other related academic modules. We conclude that this method of studying eye fixation patterns can identify different types of learners with respect to their cognitive capability and academic potential, and also allow educators to understand how their instructional design and online learning environment can stimulate higher-order cognitive activities.

Citation: The Benedict & Manolis Mavrikis (2016). "A Study On Eye Fixation Patterns of Students in Higher Education Using an Online Learning System". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Statway is one of the Community College Pathways initiatives, designed to promote students' success in their developmental math sequence and reduce the time required to earn college credit. A recent causal analysis confirmed that Statway dramatically increased students' success rates in half the time across two different cohorts. These impressive results also hold across gender and race/ethnicity groups. However, there is still room for improvement: students who did not succeed overall in Statway often did not complete the first course of the two-course sequence. Therefore, the objective of this study is to formulate a series of indicators from self-report and online learning system data, alerting instructors to students' progress during the first weeks of the first course in the Statway sequence.

Citation: Ouajdi Manai, Hiroyuki Yamada and Christopher Thorn (2016). "Real-time indicators and targeted supports: Using online platform data to accelerate student learning". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Student mistakes are often not random but, rather, reflect thoughtful yet incorrect strategies. In order for educational technologies to make full use of students' performance data to estimate the knowledge of a student, it is important to model not only the conceptions but also the misconceptions that a student's particular pattern of successes and errors may indicate. The student models that drive the 'outer loop' of Intelligent Tutoring Systems typically only track positive skills and conceptions, not misconceptions. Here, we present a method of representing misconceptions in the kinds of Knowledge Component models, or Q-Matrices, that are used by student models to estimate latent knowledge. We show, in a case study on a fraction arithmetic dataset, that incorporating a misconception into the Knowledge Component model dramatically improves model fit. We also derive qualitative insights from comparing predicted learning curves across models that incorporate varying misconception-related parameters. Finally, we show that the inclusion of a misconception in a Knowledge Component model can yield individual student estimates of misconception strength. These estimates are significantly correlated with out-of-tutor individual measures of student errors indicative of the misconception.

Citation: Ran Liu, Rony Patel and Kenneth R. Koedinger (2016). "Modeling Common Misconceptions in Learning Process Data". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
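The modeling idea can be illustrated with a toy Q-Matrix in which some items are additionally tagged with a misconception whose estimated strength lowers the predicted probability of a correct answer (the items, labels, and probabilities below are invented for illustration, not the paper's actual Knowledge Component model):

```python
# Toy Q-matrix: each item is tagged with the knowledge components (KCs)
# it exercises.
q_matrix = {
    "add_same_denom": {"fraction-addition"},
    "add_diff_denom": {"fraction-addition", "find-common-denominator"},
}

# Extend the model: tag items on which a known misconception (e.g. adding
# numerators and denominators independently) would produce a wrong answer.
q_matrix_with_misconception = {
    item: kcs | ({"MISCONCEPTION:add-across"} if item == "add_diff_denom" else set())
    for item, kcs in q_matrix.items()
}

def predict_correct(item, mastered, misconception_strength=0.0):
    """P(correct) drops when the item triggers a held misconception."""
    kcs = q_matrix_with_misconception[item]
    skill_ok = all(kc in mastered for kc in kcs
                   if not kc.startswith("MISCONCEPTION"))
    p = 0.9 if skill_ok else 0.2
    if any(kc.startswith("MISCONCEPTION") for kc in kcs):
        p *= (1.0 - misconception_strength)
    return p
```

Fitting `misconception_strength` per student from their error patterns is what yields the individual misconception estimates the abstract refers to.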

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

With the aim of better scaffolding discussion to improve learning in a MOOC context, this work investigates what kinds of discussion behaviors contribute to learning. We explored whether engaging in higher-order thinking behaviors results in more learning than paying general or focused attention to course materials. In order to evaluate whether to attribute the effect to engagement in the associated behaviors versus persistent characteristics of the students, we adopted two approaches. First, we used propensity score matching to pair students who exhibit a similar level of involvement in other course activities. Second, we explored individual variation in engagement in higher-order thinking behaviors across weeks. The results of both analyses support the attribution of the effect to the behavioral interpretation. A further analysis using LDA applied to course materials suggests that more socially oriented topics triggered richer discussion than more biopsychology-oriented topics.

Citation: Xu Wang, Miaomiao Wen and Carolyn Rose (2016). "Towards triggering higher-order thinking behaviors in MOOCs". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
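The first of the two approaches, propensity score matching, can be sketched as a greedy 1:1 nearest-neighbour matcher on precomputed scores (the caliper and data shapes here are illustrative; the paper's actual estimation procedure may differ):

```python
def match_on_propensity(treated, control, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated / control: dicts mapping student id -> propensity score
    (e.g. the estimated probability of engaging in higher-order
    discussion, given involvement in other course activities).
    Returns (treated_id, control_id) pairs within the caliper.
    """
    pairs, available = [], dict(control)
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs
```

Comparing learning outcomes only within matched pairs removes much of the selection effect of generally more involved students, which is the point of the first analysis.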

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

The 'doer effect' is an association between the number of online interactive practice activities students' do and their learning outcomes that is not only statistically reliable but has much higher positive effects than other learning resources, such as watching videos or reading text. Such an association suggests a causal interpretation--more doing yields better learning--which requires randomized experimentation to most rigorously confirm. But such experiments are expensive, and any single experiment in a particular course context does not provide rigorous evidence that the causal link will generalize to other course content. We suggest that analytics of increasingly available online learning data sets can complement experimental efforts by facilitating more widespread evaluation of the generalizability of claims about what learning methods produce better student learning outcomes. We illustrate with analytics that narrow in on a causal interpretation of the doer effect by showing that doing within a course unit predicts learning of that unit content more than doing in units before or after. We also provide generalizability evidence across four different courses involving over 12,500 students that the learning effect of doing is about six times greater than that of reading.

Citation: Kenneth R. Koedinger, Elizabeth A. McLaughlin, Julianna Zhuxin Jia and Norman L. Bier (2016). "Is the Doer Effect a Causal Relationship? How Can We Tell and Why It's Important". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This paper describes the development and evaluation of an affect-aware intelligent support component that is part of a learning environment known as iTalk2Learn. The intelligent support component is able to tailor feedback according to a student's affective state, which is deduced both from speech and interaction. The affect prediction is used to determine which type of feedback is provided and how that feedback is presented (interruptive or non-interruptive). The system includes two Bayesian networks that were trained with data gathered in a series of ecologically-valid Wizard-of-Oz studies, where the effect of the type of feedback and the presentation of feedback on students' affective states was investigated. This paper reports results from an experiment that compared a version that provided affect-aware feedback (affect condition) with one that provided feedback based on performance only (non-affect condition). Results show that students who were in the affect condition were less off-task, a result that was statistically significant. Additionally, the results indicate that students in the affect condition were less bored. The results also show that students in both conditions made learning gains that were statistically significant, while students in the affect condition had higher learning gains than those in the non-affect condition, although this result was not statistically significant in this study's sample. Taken together, the results point to the potential and positive impact of affect-aware intelligent support.

Citation: Beate Grawemeyer, Manolis Mavrikis, Wayne Holmes, Sergio Gutierrez-Santos, Michael Wiedmann and Nikol Rummel (2016). "Affecting Off-Task Behaviour: How Affect-aware Feedback Can Improve Student Learning". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Many pedagogical models in the field of learning analytics are implicit and do not overtly direct learner behavior. While this allows flexibility of use, this could also result in misaligned practice, and there are calls for more explicit pedagogical models in learning analytics. This paper presents an explicit pedagogical model, the Team and Self Diagnostic Learning (TSDL) framework, in the context of collaborative inquiry tasks. Key informing theories include experiential learning, collaborative learning, and the learning analytics process model. The framework was trialed through a teamwork competency awareness program for 14 year old students. A total of 272 students participated in the program. This paper foregrounds students' and teachers' evaluative accounts of the program. Findings reveal positive perceptions of the stages of the TSDL framework, despite identified challenges, which points to its potential usefulness for teaching and learning. The TSDL framework aims to provide theoretical clarity of the learning process, and foster alignment between learning analytics and the learning design. The current work provides trial outcomes of a teamwork competency awareness program that used dispositional analytics, and further efforts are underway to develop the discourse layer of the analytic engine. Future work will also be dedicated to application and refinement of the framework for other contexts and participants, both learners and teachers alike.

Citation: Elizabeth Koh, Antonette Shibani, Jennifer Pei-Ling Tan and Helen Hong (2016). "A Pedagogical Framework for Learning Analytics in Collaborative Inquiry Tasks: An Example from a Teamwork Competency Awareness Program". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

An adequate emotional state has proved to be essential for students' learning. This paper explores the possibility of obtaining students' feedback about the emotions they feel in class in order to discover emotion patterns that might indicate learning failures. It presents a visual dashboard that allows students to track their emotions and follow their evolution during the course. We have compiled the principal classroom-related emotions and developed a two-phase inquiry process to: verify the possibility of measuring students' emotions in the classroom; discover how emotions can be displayed to promote self-reflection; and confirm the impact of emotions on learning performance. Our results suggest that students' emotions in class are related to evaluation marks, which shows that early information about students' emotions can be useful for teachers and students to improve classroom results and learning outcomes.

Citation: Samara Ruiz, Sven Charleer, Maite Urretavizcaya, Joris Klerkx, Isabel Fernandez-Castro and Erik Duval (2016). "Supporting learning by considering emotions: Tracking and Visualization. A case study". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational data to the individual needs of each student. Although much progress has been made in modeling students' learning from data and predicting performance, these models have not been applied in real classrooms. This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It is based on using knowledge tracing to model students' skill acquisition over time, and to select questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasoning about multiple attempts to solve problems, and integrating item difficulty. This model is shown to outperform other BKT models that do not reason about (or reason about some but not all) of these features. The model was incorporated into a sequencing algorithm and deployed in two schools where it was compared to a baseline sequencing algorithm that was designed by pedagogical experts. In both schools, students using the BKT sequencing approach solved more difficult questions, and with better performance than did students who used the expert-based approach. Students were also more engaged using the BKT approach, as determined by their log-ins in the system and a questionnaire. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to students that will meet their individual learning needs.

Citation: Yossi Ben-David, Avi Segal and Kobi Gal (2016). "Sequencing Educational Content in Classrooms using Bayesian Knowledge Tracing". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
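The core update behind any BKT variant is a two-step Bayesian filter. A minimal version — without the paper's extensions for partial credit, multiple attempts, and item difficulty, and with illustrative parameter values — looks like this:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.3):
    """One step of standard Bayesian Knowledge Tracing.

    p_know: prior P(student knows the skill) before this observation.
    First apply Bayes' rule on the observed answer, then apply the
    learning transition with probability `learn`.
    """
    if correct:
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

# A short trace: the mastery estimate is revised after each answer, and a
# sequencing algorithm can gate harder questions on this estimate.
p = 0.4
for obs in (True, True, False):
    p = bkt_update(p, obs)
```

A sequencer in the spirit of the paper would then select the next question whose difficulty sits just within the student's estimated capability, rather than a fixed expert-designed order.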