Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

When designing learning analytics tools for use by learners, we have an opportunity to provide tools that consider a particular learner's situation and the learner herself. To have an actual impact on learning, such tools must be informed by theories of education. In particular, educational research shows that individual differences play a significant role in explaining students' learning processes. However, little empirical research in learning analytics has investigated the role of the theoretical constructs, such as motivational factors, that underlie the observed differences between individuals. In this work, we conducted a field experiment to examine the effect of three designed learning analytics visualizations on students' participation in online discussions in authentic course settings. Using hierarchical linear mixed models, our results revealed that the effects of the visualizations on the quantity and quality of messages posted by students with different achievement goal orientations could be either positive or negative. Our findings highlight the methodological importance of considering individual differences and have important implications for the future design and study of learning analytics visualizations.
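The hierarchical linear mixed models mentioned in the abstract have, in generic form, the structure below. This is a sketch only, with illustrative variable names rather than the authors' exact specification: for an outcome y_ij (e.g. number of messages posted) of student i in course j,

```latex
y_{ij} = \beta_0 + \beta_1\,\mathrm{Vis}_{ij} + \beta_2\,\mathrm{Goal}_{i}
       + \beta_3\,(\mathrm{Vis}\times\mathrm{Goal})_{ij} + u_j + \varepsilon_{ij},
\qquad u_j \sim \mathcal{N}(0,\tau^2), \quad \varepsilon_{ij} \sim \mathcal{N}(0,\sigma^2)
```

The random intercept u_j absorbs course-level variation, so the interaction term beta_3 is what captures how the effect of a visualization differs across achievement goal orientations.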

Citation: Sanam Shirazi Beheshtiha, Marek Hatala, Dragan Gašević and Srećko Joksimović (2016). "The Role of Achievement Goal Orientations When Studying Effect of Learning Analytics Visualizations". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This paper describes the analytical component of Learn-B (Learning Biosis), a first research prototype addressing some of the challenges inherent in workplace learning: its informal nature, the need to support workers in their self-regulated learning (SRL) processes, the importance of collective and social sharing of workplace knowledge, and its "on-demand", contextual character.

The authors stress in particular the role of the workplace community in the development of knowledge and the need for each individual to develop a sound and useful SRL approach. These aspects highlight some of the capabilities that a supporting IT tool should provide, such as:

  • Collection of learning-related contributions and their re-aggregation and analysis to create new knowledge, as well as sharing the results with users.
  • Identification of individuals' learning needs and self-setting of personal learning goals.
  • Monitoring and comparison of users' learning progress.
  • Sharing of users' learning experiences.
  • Integration of data from the different tools and services that workers use in their everyday work.

In this prototype, Learning Analytics plays an important role: according to the authors, Learning Analytics "allows for the organization to better align its learning objectives with those of its employees by knowing about their learning practices; it supports users' SRL processes by providing them with the necessary input from the social context of the workplace; and it enhances the motivation of individuals to take part in learning and knowledge building activities and sharing their experiences by providing them with feedback from the collective."

The paper then describes the internal environment of Learn-B (which also integrates wiki, social networking and bookmarking functionalities), indicating how analytics on the platform are able to enhance knowledge tracking, worker engagement, and individual and collective learning.

Citation: M. Siadaty, D. Gašević, J. Jovanović, N. Milikić, Z. Jeremić, L. Ali, A. Giljanović, M. Hatala. Learn-B: A Social Analytics-enabled Tool for Self-regulated Workplace Learning | Url:

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Learning analytics makes extensive use of trace data from learners interacting with Learning Management Systems (LMS), and one of the most common uses is to derive an estimate of the time each learner spent on task, that is, engaging in particular learning activities. The authors of this paper note that while this approach is widely used, the details are often not given, and the consequences of choices made in generating these measures are not explored.

They present two experiments exploring different measures of time on task, one using data from a fully online course at a Canadian university, and the other using data from a blended course at an Australian university.

They found that time-on-task measures typically outperformed count measures for predicting student performance, but more importantly, identified that the precise choice of time-on-task measure "can have a significant effect on the overall fit of the model, its significance, and eventually on the interpretation of research findings".

Given their findings, they argue that there are implications for research in learning analytics: "Above all is the need for more caution when using time-on-task measures for building learning analytics models. Given that details of time-on-task estimation can potentially impact reported research findings, appropriately addressing time-on-task estimation becomes a critical part of standard research practice in the learning analytics community. This is particularly true in cases where time-on-task measures are not accompanied by additional measures such as counts of relevant activities."

They call for researchers to recognise the importance of time-on-task measures, for fuller detail to be given of the measures used, and for further investigation.

(This paper is an expanded version of an earlier paper at LAK '15, which reported only on the data from the Canadian university: Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623.)

Citation: Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015) Does Time-on-task Estimation Matter? Implications on Validity of Learning Analytics Findings. | Url:

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

The paper describes a procedure-evaluation/e-training tool (named PeT) for the oil and gas industry, which tracks trainees' knowledge of, and confidence in, emergency operating procedures.

The issue that tools of this kind try to address is that one of the main causes of incidents in the oil and gas industry is lack of knowledge, in both standard and emergency situations. This lack of training can have serious consequences for workforce safety, including loss of life and serious injury. Incidents on oil and gas platforms can also severely affect the productivity of a plant, causing operational downtime, heavy costs and loss of reputation (e.g. the 2010 oil spill in the Gulf of Mexico).

PeT is a training and testing environment for both standard operating procedures (SOPs) and emergency operating procedures (EOPs), implementing multiple-choice knowledge tests for two emergency procedures. The main objectives of PeT are to:

  • Verify knowledge of the standard procedures, with the aim of avoiding incidents.
  • Ensure that the workforce takes appropriate decisions in an emergency setting.
  • Create a competence portfolio for each operator, taking into account his/her past experience and expertise.
  • Provide remedial instruction and feedback to address gaps in operators' knowledge and competences in the execution of both critical and non-critical tasks.


Concerning Learning Analytics, PeT tracks three kinds of data:

  • Session data: the time taken to complete the overall knowledge-verification session, the employee's ID, and the code of the operating procedure undertaken.
  • Question data: the time a user has spent on each single step of the procedure.
  • Choice data: the correctness of answers.


Moreover, the study introduced an overall confidence metric (measuring an operator's ability to execute a task, whether a standard or an emergency operating procedure, correctly, within the proper timeline and with an optimal mindset), together with a notion of the criticality of each step.
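As a purely illustrative sketch (the paper's actual data model and confidence formula are not reproduced here; all names, fields and weights below are hypothetical), the three kinds of tracked data and a criticality-weighted confidence score could be combined like this:

```python
from dataclasses import dataclass

@dataclass
class StepRecord:
    """One step of an operating procedure: question data plus choice data."""
    step_id: str
    criticality: float   # hypothetical weight, e.g. 1.0 = routine, 3.0 = safety-critical
    time_spent_s: float  # question data: time spent on this step
    correct: bool        # choice data: whether the chosen answer was correct

@dataclass
class SessionRecord:
    """Session data for one knowledge-verification run."""
    employee_id: str
    procedure_code: str      # e.g. a SOP or EOP identifier
    completion_time_s: float
    steps: list

def confidence_score(session: SessionRecord) -> float:
    """Hypothetical confidence metric: criticality-weighted share of correct steps.

    This is NOT the paper's formula, only an illustration of how correctness
    and step criticality could be folded into a single score in [0, 1].
    """
    total = sum(s.criticality for s in session.steps)
    if total == 0:
        return 0.0
    return sum(s.criticality for s in session.steps if s.correct) / total

session = SessionRecord(
    employee_id="OP-042",
    procedure_code="EOP-7",
    completion_time_s=412.0,
    steps=[
        StepRecord("s1", criticality=1.0, time_spent_s=30.0, correct=True),
        StepRecord("s2", criticality=3.0, time_spent_s=95.0, correct=True),
        StepRecord("s3", criticality=2.0, time_spent_s=60.0, correct=False),
    ],
)
print(round(confidence_score(session), 2))  # prints 0.67
```

Under a weighting like this, missing a high-criticality step lowers the score more than missing a routine one, which matches the paper's emphasis on step criticality.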

The PeT tool was tested and improved over two separate sessions at an oil and gas company in Canada in 2014. The experiment involved several operators with different backgrounds and levels of expertise.

Results show that, for the analysed group of workers, PeT can efficiently assess the knowledge and behaviour of the workforce in the oil and natural gas industry, in both standard and emergency procedures, ensuring traceable training for operators and leading to greater workforce safety together with less risk of productivity losses for the plant.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Evidence of the month: August 2015

This meta-analysis looked at studies comparing learning outcomes for learners using Intelligent Tutoring Systems (ITS) with those for learners using other learning opportunities. The authors identified 107 such studies and found that:

ITS were associated with greater achievement when compared to teacher-led group instruction, other sorts of computer instruction and textbooks.

There was no significant difference in outcomes between ITS and small-group or individualised human tutors.

The positive findings applied across all levels of education, and regardless of whether ITS were used as the main means of teaching, a supplement, part of a teacher-led strategy, or as a homework aid.

This is not a new finding: previous work has come to broadly similar conclusions. But this study is recent and large in scope.

Citation: Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901. | Url:

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

There is a large body of research suggesting that the amount of time spent on learning can improve the quality of learning, as represented by academic performance. The widespread adoption of learning technologies such as learning management systems (LMSs) has made large amounts of data about student learning readily accessible to educational researchers. One common use of these data is to measure the time that students have spent on different learning tasks. Given that an LMS typically captures only the times at which students executed various actions, time-on-task measures can currently only be estimates.

This paper takes five learning analytics models of student performance, and examines the consequences of using 15 different time-on-task estimation strategies. It finds that choice of estimation strategy can have a significant effect on the overall fit of a model, its significance, and the interpretation of research findings.
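The estimation problem can be made concrete with a minimal sketch. This is not any of the paper's fifteen strategies verbatim, and the event times and 30-minute cap are illustrative; it simply contrasts two common heuristics over the same event log: summing the raw gaps between consecutive actions, versus capping long gaps that more plausibly indicate a break than time on task.

```python
from datetime import datetime

# A toy LMS event log: timestamps of one student's consecutive actions.
events = [
    datetime(2015, 3, 1, 10, 0),
    datetime(2015, 3, 1, 10, 5),
    datetime(2015, 3, 1, 10, 9),
    datetime(2015, 3, 1, 11, 45),  # 96-minute gap: a break, or off-task time?
    datetime(2015, 3, 1, 11, 50),
]

def gaps_minutes(events):
    """Durations, in minutes, between each pair of consecutive events."""
    return [(b - a).total_seconds() / 60 for a, b in zip(events, events[1:])]

def naive_estimate(events):
    """Heuristic 1: sum every gap between consecutive events."""
    return sum(gaps_minutes(events))

def capped_estimate(events, cap=30.0):
    """Heuristic 2 (hypothetical 30-minute cap): truncate each gap at `cap`."""
    return sum(min(g, cap) for g in gaps_minutes(events))

print(naive_estimate(events))   # prints 110.0
print(capped_estimate(events))  # prints 44.0
```

The same log yields 110 minutes of estimated time on task under the first heuristic and 44 under the second, which is exactly the kind of divergence that can change a model's fit, significance, and interpretation.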

The paper concludes that:

  1. The learning analytics community should recognize the importance of time-on-task estimation and the role it plays in the quality of analytical models and their interpretation.
  2. Publications should explain in detail how time on task has been estimated, in order to support the development of open, replicable and reproducible research.
  3. This area should be investigated further, in order to provide a set of standards and common practices for the conduct of learning analytics research.

This was selected as the best paper at the Learning Analytics and Knowledge conference 2015.

Citation: Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623 | Url:

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This paper describes an application of learning analytics that builds on an existing research programme investigating how students contribute and attend to the messages of others in asynchronous online discussions. The implementation was conducted in a semester-long blended graduate seminar on educational technology consisting of nine first- or second-year doctoral students and one instructor.

Two kinds of analytics were designed: one set provided students with real-time information on their activity; the other was presented to students in a separate digital space for reflection.

Findings from an initial implementation of the application indicate that the learning analytics intervention supported changes in students' discussion participation. Many students used the analytics to set, enact, and reflect on goals for their discussion forum activity. The most common change that students made after the analytics were introduced related to the percentage of their peers' posts that they read.

The authors identify five issues for future work on learning analytics in online discussions:

  1. Analytics may prompt unintentional as well as purposeful change
  2. The same analytic will prompt a variety of learner responses
  3. Learners need to understand why the analytics are relevant to their learning and how they are calculated
  4. The affective components of students’ reactions need to be taken into account
  5. Students need support in the process of enacting analytics-driven change.

Citation: Friend Wise, Alyssa, Zhao, Yuting, & Hausknecht, Simone Nicole. (2014). Learning analytics for online discussions: embedded and extracted approaches. Journal of Learning Analytics, 1(2), 48-71. | Url: