Tag Archives: Learning analytics

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

The paper describes a procedure evaluation/e-training tool (named PeT) for the oil and gas industry, designed to track the knowledge and confidence of trainees in emergency operating procedures.

The issue that tools of this kind try to address is that one of the main causes of incidents in the oil & gas industry is lack of knowledge, in both standard and emergency situations. This lack of training can have serious consequences for workforce safety, including loss of life and serious injuries. Incidents on oil & gas platforms can also severely affect the productivity of the plant, through operations downtime, heavy costs and loss of reputation (e.g. the oil spill in the Gulf of Mexico in 2010).

PeT is a training and testing environment for both standard (SOP) and emergency operating procedures (EOP), implementing multiple-choice knowledge tests for two emergency procedures. The main objectives of PeT are:

  • Verify knowledge of the standard procedures, with the aim of avoiding incidents.
  • Ensure that the workforce makes appropriate decisions in an emergency setting.
  • Create a competence portfolio for each operator, taking into account his/her past experience and expertise.
  • Provide remedial instructions and feedback to address gaps in operators' knowledge and competences in the execution of both critical and non-critical tasks.


Concerning Learning Analytics, PeT tracks three kinds of data:

  • Session data, related to the time of completion of the overall knowledge verification session, ID of the employee and code of the operating procedure undertaken.
  • Question data, related to the time a user has spent on each single step of the procedure.
  • Choice data, related to the correctness of answers.


Moreover, the study introduces an overall confidence metric, which measures an operator's ability to execute a task (whether a standard or an emergency operating procedure) correctly, within the proper timeline and with an optimal mindset, together with a notion of the criticality of each step.
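The paper does not give the formula, but a metric of this kind would plausibly combine the three data streams PeT already tracks: answer correctness, time per step, and step criticality. The sketch below is purely illustrative; the weighting scheme, function names and field names are assumptions, not the authors' actual metric.

```python
# Illustrative sketch only: combines correctness, time per step and
# step criticality into one score. Not the paper's actual formula.

def step_confidence(correct: bool, time_spent: float,
                    time_allowed: float, criticality: float) -> float:
    """Score one procedure step in [0, 1].

    correct      -- whether the operator chose the right answer
    time_spent   -- seconds taken on this step
    time_allowed -- target time for this step
    criticality  -- weight in [0, 1]; critical steps count more
    """
    if not correct:
        return 0.0
    # Full score if within the target time, decaying as the step overruns.
    return min(1.0, time_allowed / max(time_spent, 1e-9))

def overall_confidence(steps: list[dict]) -> float:
    """Criticality-weighted average of per-step scores."""
    total_weight = sum(s["criticality"] for s in steps)
    if total_weight == 0:
        return 0.0
    weighted = sum(
        step_confidence(s["correct"], s["time_spent"],
                        s["time_allowed"], s["criticality"]) * s["criticality"]
        for s in steps
    )
    return weighted / total_weight

session = [
    {"correct": True,  "time_spent": 20, "time_allowed": 30, "criticality": 1.0},
    {"correct": True,  "time_spent": 60, "time_allowed": 30, "criticality": 0.5},
    {"correct": False, "time_spent": 10, "time_allowed": 30, "criticality": 1.0},
]
print(round(overall_confidence(session), 3))  # → 0.5
```

Weighting by criticality means that a mistake on a critical emergency step drags the overall score down far more than the same mistake on a routine one, which matches the paper's emphasis on critical tasks.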

The PeT tool was tested and improved in two separate sessions at an oil & gas company in Canada in 2014. The experiment was conducted with several operators from different backgrounds and with different levels of expertise.

Results show that, for the analysed group of workers, PeT can efficiently assess the knowledge and behaviour of the workforce in the oil and natural gas industry, for both standard and emergency procedures, ensuring traceable training for operators and leading to greater workforce safety together with a lower risk of productivity losses for the plant.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This study aims to develop a recommender system for social learning platforms that combine traditional learning management systems with commercial social networks like Facebook. We therefore take into account the social interactions of users to make recommendations on learning resources. We propose to make use of graph-walking methods to improve the performance of well-known baseline algorithms. We evaluate the proposed graph-based approach in terms of F1 score, an effective combination of precision and recall, the two fundamental metrics used in the recommender systems area. The results show that the graph-based approach can help to improve the performance of the baseline recommenders, particularly for the rather sparse educational datasets used in this study.
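For readers unfamiliar with the evaluation measure: F1 is the harmonic mean of precision and recall over a user's recommendation list. A minimal computation, with made-up item IDs:

```python
def precision_recall_f1(recommended: list, relevant: set) -> tuple:
    """Precision, recall and F1 for one user's top-N recommendation list."""
    hits = sum(1 for item in recommended if item in relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Top-5 list for one user vs. the resources they actually engaged with.
recommended = ["r1", "r2", "r3", "r4", "r5"]
relevant = {"r2", "r4", "r9", "r10"}
p, r, f = precision_recall_f1(recommended, relevant)
print(p, r, round(f, 3))  # → 0.4 0.5 0.444
```

Because the harmonic mean punishes imbalance, a recommender cannot score well on F1 by inflating either precision or recall alone, which is why it is a common single figure of merit in this area.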


Citation: Fazeli, S., Loni, B., Drachsler, H., Sloep, P. (2014, 16-19 September). Which recommender system can best fit social learning platforms? In C. Rensing, S. de Freitas, T. Ley, & P. Muñoz-Merino (Eds.), Open Learning and Teaching in Educational Communities. Proceedings of the 9th European Conference on Technology Enhanced Learning (EC-TEL2014), Lecture Notes in Computer Science 8719 (pp. 84-97). Graz, Austria: Springer International Publishing. | Url: http://hdl.handle.net/1820/5685

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This longitudinal study explores the effects on self-regulated learning of tracking and monitoring, with a mobile tool, the time devoted to learning. Graduate students (n = 36) from three different online courses used their own mobile devices to track how much time they devoted to learning over a period of four months. Repeated measures of the Online Self-Regulated Learning Questionnaire and the Validity and Reliability of Time Management Questionnaire were taken over the course. Our findings reveal positive effects of time tracking on time management skills. Variations in the channel, content and timing of the mobile notifications used to foster reflective practice are investigated, and time-logging patterns are described. These results not only provide evidence of the benefits of recording learning time, but also suggest relevant cues on how mobile notifications should be designed and prompted to support self-regulated learning by students in online courses.


Citation: Tabuenca, B., Kalz, M., Drachsler, H., & Specht, M. (2015). Time will tell: The role of mobile learning analytics in self-regulated learning. Computers & Education, 89, 53–74. | Url: http://dspace.ou.nl/handle/1820/6172

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This evidence was extracted from a paper presented at the 15th International Conference on Knowledge Technologies and Data-driven Business (i-KNOW 2015), held in Graz, Austria, on 21–22 October 2015.

In this paper, the authors describe a framework, named Social Semantic Server (SSS), which can serve as a flexible tool to support informal learning in different workplace scenarios.

The development of this tool is based on the assumption that “individual knowledge is constructed through collaborative knowledge building […][and that] a knowledge base is co-constructed by a community of learners as a result of their activities mediated by shared artefacts”. This implies that the learner community can be considered a Distributed Cognitive System, and that the process of meaning construction in this environment can be defined as “Meaning Making”.

SSS was developed around several Design Principles, among which a number of learning KPIs can be found, such as tracking the physical, temporal, social and semantic context of user-artefact and user-user interactions, or tracking the history of network interactions. Thanks to Learning Analytics, this network can be a good source for understanding what kind of information users are searching for and for spotting new trends in the Meaning Making process.

The second part of the conference paper describes several SSS services, namely metadata degrees of formality, tracking of user interactions, a search engine, a recommendation tool, knowledge structures, a Q&A environment, access restrictions, and the collection and aggregation of learning inputs inside the framework.

The last part of the paper is dedicated to three case studies, which show how SSS can serve as a flexible tool for generating informal learning environments at the workplace. Three different IT tools were built on some of the SSS services described above, supporting the informal learning of healthcare professionals (Bits & Pieces), academic researchers (KnowBrain, currently under development) and the training of future teachers (Attacher). During these case studies, the context of the collected, generated or modified resources was tracked and analysed through dedicated KPIs: author, time of collection and the set of attached tags for Attacher, with categories, ratings and discussions additionally available for B&P and KnowBrain. As indicated in the paper, “This contextual characteristics can be exploited to create networks of actors and artefacts, as well as to make Learning Analytics”.

Citation: Dennerlein, S., Kowald, D., Lex, E., Theiler, D., Lacic, E. & Ley, T. (2015). The Social Semantic Server: A Flexible Framework to Support Informal Learning at the Workplace. 15th International Conference on Knowledge Technologies and Data-driven Business (i-KNOW 2015), Graz, Austria | Url: http://www.researchgate.net/publication/280920425_The_Social_Semantic_Server_A_Flexible_Framework_to_Support_Informal_Learning_at_the_Workplace

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

The paper deals with two key ideas that could help schools graduate more students on time. The first is to produce a ranked list that orders students according to their risk of not graduating on time. The second is to predict when they'll go off track, to help schools plan the urgency of interventions. Both predictions help to identify and prioritize students at risk, enabling schools to target interventions. The eventual goal of these efforts is to focus the limited resources of schools on increasing graduation rates.
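The core of the first idea, the ranked list, can be sketched in a few lines: take per-student risk scores from any classifier, sort descending, and intervene from the top until resources run out. The student names and scores below are invented for illustration, not data from the study.

```python
# Illustrative only: turn per-student risk scores (from any classifier)
# into a prioritized intervention list. Names and scores are made up.
risk_scores = {
    "student_a": 0.91,  # predicted probability of not graduating on time
    "student_b": 0.35,
    "student_c": 0.77,
    "student_d": 0.12,
}

# Rank students from highest to lowest risk.
ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)

# With capacity for only two interventions, target the top of the list.
capacity = 2
priority_list = ranked[:capacity]
print(priority_list)  # → ['student_a', 'student_c']
```

The paper's second idea, predicting *when* a student will go off track, would then determine the urgency ordering within that capped list.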

The results of this study have helped a school district to systematically adjust its analytical methods as it continues to build a universal EWI (early-warning indicator) system. The district is also highly interested in the web-based dashboard application that was developed.

Citation: Aguiar, Everaldo, Lakkaraju, Himabindu, Bhanpuri, Nasir, Miller, David, Yuhas, Ben, & Addison, Kecia L. (2015). Who, when, and why: a machine learning approach to prioritizing students at risk of not graduating high school on time. Paper presented at the Fifth International Conference on Learning Analytics And Knowledge (LAK '15). | Url: http://d-miller.github.io/assets/AguiarEtAl2015.pdf

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector:

This panel at the LAK15 conference brought together researchers from five European countries (Estonia, France, the Netherlands, Spain and the UK) to examine progress in learning analytics from European perspectives. In doing so, it identified the benefits and challenges associated with sharing and developing practice across national boundaries.

Citation: Ferguson, Rebecca, Cooper, Adam, Drachsler, Hendrik, Kismihok, Gabor, Boyer, Anne, Tammets, Kairit, & Martinez Mones, Alejandra. (2015). Learning Analytics: European Perspectives. Paper presented at LAK '15, Poughkeepsie, NY, USA. | Url: http://oro.open.ac.uk/42346/

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This article explores the feasibility of using student promotions of content, in a blogosphere, to identify quality content, and implications for students and instructors. It shows that students actively and voluntarily promote content, identify quality material with considerable accuracy, and use promotion data to select what to read. Application of the peer promotions tool provides the desired results — the promoted content is of significantly higher quality than content that is not promoted, and content that is repeatedly promoted is of higher quality than content that has fewer promotions. These results have been verified by two different case studies. Other results show that good and poor promoters can be identified. Both classifications of promoters have value: by focusing on good promoters, the reliability of quality assessment can be improved; by focusing on poor promoters, the instructor is in a better position to identify students who may be struggling.

Citation: Gunnarsson, Bjorn Levi, & Alterman, Richard. (2014). Peer promotions as a method to identify quality content. Journal of Learning Analytics, 1(2), 126-150. | Url: http://epress.lib.uts.edu.au/journals/index.php/JLA/issue/archive

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

This paper discusses a scalable approach for integrating learning analytics into an online K-12 science curriculum. A description of the curriculum and the underlying pedagogical framework is followed by a discussion of the challenges to be tackled as part of this integration. The paper includes examples of data visualization based on teacher usage data, along with a methodology for examining an inquiry-based science programme. The paper uses data from a medium-sized school district, comprising 53 schools, 1,026 teachers, and nearly one-third of a million curriculum visits during the 2012–2013 school year. On their own, learning analytics data paint an incomplete picture of what teachers do in their classrooms with the curriculum. Surveys, interviews, and classroom observation can be used to contextualize this data. There are several hurdles that must be overcome to ensure that new data collection and analysis tools fulfill their promise. Schools and districts must address the gap in access to technology. While some schools have achieved one-to-one computing, most schools are not even close to this goal, and this has profound implications for their ability to collect reliable analytics data. Conversations with and observations of teachers reveal that teachers and students often share accounts, and that students are limited in the activities they can complete online. This means that analytics data may not prove to be reliable.

Citation: Monroy, Carlos, Snodgrass Rangel, Virginia, & Whitaker, Reid. (2014). A strategy for incorporating learning analytics into the design and evaluation of a K-12 science curriculum. Journal of Learning Analytics, 1(2), 94-125. | Url: http://epress.lib.uts.edu.au/journals/index.php/JLA/issue/archive

Type: Evidence | Proposition: D: Ethics | Polarity: | Sector:

There is growing concern about the extent to which individuals are tracked while online. Within higher education, understanding of issues surrounding student attitudes to privacy is influenced not only by the apparent ease with which members of the public seem to share the detail of their lives, but also by the traditionally paternalistic institutional culture of universities.

This paper explores issues around consent and opting in or out of data tracking. It considers how three providers of massive open online courses (Coursera, EdX and FutureLearn) inform users about data usage. It also discusses how higher education institutions can work toward an approach that engages students and informs them in more detail about the implications of learning analytics for their personal data.

The paper restates the need to

  1. develop a coherent approach to consent, taking into account the findings of research into how people make decisions about personal data
  2. recognise that people can only engage selectively in privacy self management
  3. adjust the timing of privacy law to take into account that data may be combined and reanalysed in the future
  4. develop more substantive privacy rules

This paper was nominated for the best paper award at the 2015 Learning Analytics and Knowledge conference.

Citation: Prinsloo, Paul, & Slade, Sharon. (2015). Student privacy self-management: implications for learning analytics. Paper presented at the LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723585 | Url: http://dl.acm.org/citation.cfm?id=2723576

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

There is a large body of research suggesting that the amount of time spent on learning can improve the quality of learning, as represented by academic performance. The widespread adoption of learning technologies such as learning management systems (LMSs) has resulted in large amounts of data about student learning being readily accessible to educational researchers. One common use of this data is to measure the time that students have spent on different learning tasks. Given that LMSs typically capture only the times when students executed various actions, time-on-task measures can currently only be estimates.

This paper takes five learning analytics models of student performance, and examines the consequences of using 15 different time-on-task estimation strategies. It finds that choice of estimation strategy can have a significant effect on the overall fit of a model, its significance, and the interpretation of research findings.
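To make the estimation problem concrete: LMS logs record event timestamps, not durations, so time on task is typically estimated from the gaps between consecutive events, with some rule for handling implausibly long gaps. One common family of strategies caps each gap at a threshold. The sketch below is a generic example of that family, not any specific strategy from the paper, and the ten-minute cap is an arbitrary choice:

```python
from datetime import datetime

def estimate_time_on_task(timestamps: list, cap_seconds: float = 600) -> float:
    """Estimate time on task (seconds) from ordered event timestamps.

    Each gap between consecutive events counts as on-task time, but is
    capped at `cap_seconds` so long idle periods (breaks, abandoned
    sessions) are not counted in full.
    """
    events = sorted(timestamps)
    total = 0.0
    for earlier, later in zip(events, events[1:]):
        gap = (later - earlier).total_seconds()
        total += min(gap, cap_seconds)
    return total

events = [
    datetime(2015, 3, 1, 9, 0, 0),
    datetime(2015, 3, 1, 9, 4, 0),   # 4-minute gap: counted in full
    datetime(2015, 3, 1, 10, 0, 0),  # 56-minute gap: capped at 10 minutes
]
print(estimate_time_on_task(events))  # → 840.0
```

Switching the cap from ten minutes to thirty changes every estimate in the dataset, which is exactly the kind of sensitivity the paper documents, and why it asks authors to report their estimation strategy.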

The paper concludes

  1. The learning analytics community should recognize the importance of time-on-task estimation and the role it plays in the quality of analytical models and their interpretation
  2. Publications should explain in detail how time on task has been estimated, in order to support the development of open, replicable and reproducible research
  3. This area should be investigated further in order to provide a set of standards and common practices for the conduct of learning analytics research.

This was selected as the best paper at the Learning Analytics and Knowledge conference 2015.

Citation: Kovanovic, Vitomir, Gasevic, Dragan, Dawson, Shane, Joksimovic, Srecko, Baker, Ryan S, & Hatala, Marek. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623 | Url: http://dl.acm.org/citation.cfm?id=2723576