Tag Archives: virtual learning environment

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Learning analytics makes extensive use of trace data from learners interacting with Learning Management Systems (LMS), and one of the most common uses is to derive an estimate of the time each learner spent on task, that is, engaging in particular learning activities. The authors of this paper note that while this approach is widely used, the details are often not given, and the consequences of choices made in generating these measures are not explored.

They present two experiments exploring different measures of time on task, one using data from a fully-online course at a Canadian university, and another using data from a blended course at an Australian university.

They found that time-on-task measures typically outperformed count measures for predicting student performance, but more importantly, identified that the precise choice of time-on-task measure "can have a significant effect on the overall fit of the model, its significance, and eventually on the interpretation of research findings".

Given their findings, they argue that there are implications for research in learning analytics: "Above all is the need for more caution when using time-on-task measures for building learning analytics models. Given that details of time-on-task estimation can potentially impact reported research findings, appropriately addressing time-on-task estimation becomes a critical part of standard research practice in the learning analytics community. This is particularly true in cases where time-on-task measures are not accompanied by additional measures such as counts of relevant activities."

They call for researchers to recognise the importance of time-on-task measures, for fuller detail to be given of the measures used, and for further investigation.

(This paper is an expanded version of an earlier paper at LAK15, which reported only on the data from the Canadian university: Kovanovic, Vitomir, Gasevic, Dragan, Dawson, Shane, Joksimovic, Srecko, Baker, Ryan S, & Hatala, Marek. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623.)

Citation: Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015). Does time-on-task estimation matter? Implications for the validity of learning analytics findings. Journal of Learning Analytics. | Url: https://epress.lib.uts.edu.au/journals/index.php/JLA/article/view/4501

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This study aims to develop a recommender system for social learning platforms that combine traditional learning management systems with commercial social networks like Facebook. We therefore take the social interactions of users into account when recommending learning resources. We propose graph-walking methods to improve the performance of well-known baseline algorithms, and we evaluate the proposed graph-based approach in terms of its F1 score, an effective combination of precision and recall, the two fundamental metrics used in the recommender systems area. The results show that the graph-based approach can help to improve the performance of the baseline recommenders, particularly for the rather sparse educational datasets used in this study.
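As a minimal sketch of the F1 evaluation mentioned above, assuming a simple top-N recommendation setting: F1 is the harmonic mean of precision (share of recommended items that are relevant) and recall (share of relevant items that were recommended). The function name and the example data are invented for illustration, not taken from the study.

```python
# Illustrative only: F1 for one user's top-N recommendation list.
def f1_at_n(recommended, relevant):
    """Return (precision, recall, F1) for one user's recommended list."""
    hits = len(set(recommended) & set(relevant))
    if hits == 0:
        return 0.0, 0.0, 0.0
    precision = hits / len(recommended)            # hits / items recommended
    recall = hits / len(relevant)                  # hits / items actually relevant
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 5 recommended learning resources, of which 2 of the user's
# 4 relevant resources appear in the list.
p, r, f1 = f1_at_n(["r1", "r2", "r3", "r4", "r5"], ["r2", "r4", "r6", "r7"])
print(round(p, 2), round(r, 2), round(f1, 3))  # 0.4 0.5 0.444
```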


Citation: Fazeli, S., Loni, B., Drachsler, H., Sloep, P. (2014, 16-19 September). Which recommender system can best fit social learning platforms? In C. Rensing, S. de Freitas, T. Ley, & P. Muñoz-Merino (Eds.), Open Learning and Teaching in Educational Communities. Proceedings of the 9th European Conference on Technology Enhanced Learning (EC-TEL2014), Lecture Notes in Computer Science 8719 (pp. 84-97). Graz, Austria: Springer International Publishing. | Url: http://hdl.handle.net/1820/5685

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

There is a large body of research suggesting that the amount of time spent on learning can improve the quality of learning, as represented by academic performance. The widespread adoption of learning technologies such as learning management systems (LMSs) has resulted in large amounts of data about student learning being readily accessible to educational researchers. One common use of this data is to measure the time that students have spent on different learning tasks. Given that an LMS typically captures only the times at which students executed various actions, time-on-task measures can currently only be estimates.

This paper takes five learning analytics models of student performance, and examines the consequences of using 15 different time-on-task estimation strategies. It finds that choice of estimation strategy can have a significant effect on the overall fit of a model, its significance, and the interpretation of research findings.
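To make the point concrete: because an LMS records only event timestamps, any time-on-task estimate must decide what to do with long gaps between actions (was the student reading, or away?). The sketch below, which is illustrative rather than the authors' code, applies two of the many possible strategies to the same invented event log and shows how far apart the resulting estimates can be.

```python
# Illustrative only: two time-on-task estimation strategies on one event log.
# Timestamps are seconds since session start; the 600 s threshold is invented.

def time_on_task_capped(timestamps, cap=600):
    """Sum gaps between consecutive events, capping long gaps at `cap`."""
    return sum(min(b - a, cap) for a, b in zip(timestamps, timestamps[1:]))

def time_on_task_dropped(timestamps, cutoff=600):
    """Sum gaps between consecutive events, discarding gaps above `cutoff`."""
    return sum(b - a for a, b in zip(timestamps, timestamps[1:]) if b - a <= cutoff)

events = [0, 120, 300, 2100, 2220]  # one 30-minute gap between actions
print(time_on_task_capped(events))   # 1020 (long gap counted as 600 s)
print(time_on_task_dropped(events))  # 420  (long gap excluded entirely)
```

The same log yields estimates that differ by more than a factor of two, which is exactly why the choice of strategy can change a model's fit and interpretation.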

The paper concludes that:

  1. the learning analytics community should recognise the importance of time-on-task estimation and the role it plays in the quality of analytical models and their interpretation;
  2. publications should explain in detail how time on task has been estimated, in order to support the development of open, replicable and reproducible research;
  3. this area should be investigated further, in order to provide a set of standards and common practices for the conduct of learning analytics research.

This was selected as the best paper at the Learning Analytics and Knowledge conference 2015.

Citation: Kovanovic, Vitomir, Gasevic, Dragan, Dawson, Shane, Joksimovic, Srecko, Baker, Ryan S, & Hatala, Marek. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623 | Url: http://dl.acm.org/citation.cfm?id=2723576

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:


This empirical study investigates students' learning choices for mathematics and statistics in a blended learning environment, composed of both online and face-to-face learning components. The students (N = 730) were university freshmen with a strong diversity in prior schooling and a wide range of proficiency in quantitative subjects. In this context, we investigated the impact that individual differences in achievement emotions (enjoyment, anxiety, boredom, hopelessness) had on students' learning choices, in terms of the intensity of using the online learning mode versus the face-to-face mode. Unlike the general level of learning activities, which is only minimally influenced by achievement emotions, these emotions appear to have a moderately strong effect on a student's preference for online learning. Following this, we explored the antecedents of achievement emotions. Through the use of path-modeling, we conclude that while goal-setting behavior only marginally impacts achievement emotions, effort views, a crucial component of the social-cognitive model of implicit theories of intelligence, have a substantial impact on achievement emotions.

Citation: Tempelaar, D. T., Niculescu, A., Rienties, B., Giesbers, B., & Gijselaers, W. H. (2012). How achievement emotions impact students' decisions for online learning, and what precedes those emotions. Internet and Higher Education, 15 (3), 161-169. | Url: http://www.sciencedirect.com/science/article/pii/S1096751611000704

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

This paper focuses on work conducted at The Open University (OU), one of the world’s largest distance learning institutions, into predicting students who are at risk of failing a module. Since tutors at the university do not interact face to face with students, it can be difficult to identify students who are struggling, and to respond in time to resolve the difficulty. Predictive models were developed and tested using historic Virtual Learning Environment (VLE) activity data, combined with other data sources, for three OU modules. These revealed that it is possible to predict student failure by looking for changes in user activity in the VLE. More focused analysis of these modules yielded some promising results for the creation of accurate hypotheses about students who fail.

Comment by Martyn Cooper: This paper reports an investigation, in the context of online tutoring, of the relationship between learning analytics metrics and retention. It highlights the important result that little can be deduced from absolute levels of interaction, but that changes in the level of interaction are significant. Although this is a limited study, in the context of one VLE and a few modules, it is likely to be a paper with wider significance.
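The core idea, that a drop relative to a student's own recent activity is more informative than an absolute click count, can be sketched as follows. This is a hypothetical illustration, not the paper's model; the function name, the 50% drop threshold, and the weekly counts are all invented.

```python
# Illustrative only: flag weeks where a student's VLE activity falls well
# below their own running average, rather than below a fixed absolute level.

def flag_at_risk(weekly_clicks, drop_ratio=0.5):
    """Return week indices where clicks fell below `drop_ratio` times the
    student's average over all preceding weeks."""
    flagged = []
    for week in range(1, len(weekly_clicks)):
        baseline = sum(weekly_clicks[:week]) / week  # average of earlier weeks
        if baseline > 0 and weekly_clicks[week] < drop_ratio * baseline:
            flagged.append(week)
    return flagged

# A student who is active early, then disengages from week 3 onwards:
print(flag_at_risk([40, 35, 38, 10, 5]))  # [3, 4]
```

Because the baseline is each student's own history, the same rule flags a highly active student who halves their activity and ignores a consistently low-activity student who is nonetheless stable.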

Citation: Wolff, A., Zdrahal, Z., Nikolov, A., & Pantucek, M. (2013). Improving retention: predicting at-risk students by analysing clicking behaviour in a virtual learning environment. In: Third International Learning Analytics and Knowledge Conference (LAK13), 8-12 April, Leuven, Belgium. | Url: http://dl.acm.org/citation.cfm?id=2460296