Archives

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Bayesian Knowledge Tracing (BKT) is one of the most popular knowledge inference models due to its interpretability and ability to infer student knowledge. Proper student modeling can help guide the behavior of a cognitive tutor system and give researchers insight into how students learn. Using four different datasets, we study the relationship between the parameter-fitting error and the difficulty index of the skills, and how the size of the dataset affects this relationship.
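
As an illustration of the model, here is a minimal sketch of the standard four-parameter BKT update, with illustrative parameter values (this is the general formulation, not the authors' fitting code):

```python
# Standard BKT: update P(known) from one observed response, then apply
# the learning transition. Parameter values are illustrative only.
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    if correct:
        num = p_know * (1 - p_slip)
        denom = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        denom = num + (1 - p_know) * (1 - p_guess)
    posterior = num / denom
    # After each practice opportunity the student may acquire the skill.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # initial P(L0), hypothetical
for response in [True, False, True, True]:  # hypothetical response sequence
    p = bkt_update(p, response)
    print(round(p, 3))
```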

Citation: Francesc Martori, Jordi Cuadros and Lucinio González-Sabaté (2016). "Studying the relationship between BKT adjustment error and the skill's difficulty index". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

How can we best align learning analytics practices with disciplinary knowledge practices in order to support student learning? Although learning analytics itself is an interdisciplinary field, it tends to take a 'one-size-fits-all' approach to the collection, measurement, and reporting of data, overlooking disciplinary knowledge practices. In line with a recent trend in higher education research, this paper considers the contribution of a realist sociology of education to the field of learning analytics, drawing on findings from recent student focus groups at an Australian university. It examines what learners say about their data needs with reference to organizing principles underlying knowledge practices within their disciplines. The key contribution of this paper is a framework that could be used as the basis for aligning the provision and/or use of data in relation to curriculum, pedagogy, and assessment with disciplinary knowledge practices. The framework extends recent research in Legitimation Code Theory, which understands disciplinary differences in terms of the principles that underpin knowledge-building. The preliminary analysis presented here both provides a tool for ensuring a fit between learning analytics practices and disciplinary practices and standards for achievement, and signals disciplinarity as an important consideration in learning analytics practices.

Citation: Jen McPherson, Huong Ly Tong, Scott J. Fatt and Danny Y.T. Liu (2016). "Student perspectives on data provision and use: Starting to unpack disciplinary differences". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

When designing learning analytics tools for use by learners, we have an opportunity to provide tools that consider a particular learner's situation and the learner herself. To afford actual impact on learning, such tools have to be informed by theories of education. In particular, educational research shows that individual differences play a significant role in explaining students' learning process. However, limited empirical research in learning analytics has investigated the role of theoretical constructs, such as motivational factors, that underlie the observed differences between individuals. In this work, we conducted a field experiment to examine the effect of three designed learning analytics visualizations on students' participation in online discussions in authentic course settings. Using hierarchical linear mixed models, our results revealed that the effects of visualizations on the quantity and quality of messages posted by students with differences in achievement goal orientations could be either positive or negative. Our findings highlight the methodological importance of considering individual differences and pose important implications for the future design and research of learning analytics visualizations.
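
As a concrete illustration of the analytic approach (a sketch under assumed column names, not the authors' code), a hierarchical linear mixed model of posting quantity with random intercepts per course could be fit with statsmodels:

```python
# Hypothetical data: one row per student, with the visualization
# condition, a goal-orientation score, a post count, and a course id.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("discussion_participation.csv")  # hypothetical file

# Fixed effects: condition, achievement goal orientation, and their
# interaction; random intercepts for course capture the hierarchy.
model = smf.mixedlm(
    "posts ~ visualization * mastery_approach",
    data=df,
    groups=df["course_id"],
)
result = model.fit()
print(result.summary())
```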

Citation: Sanam Shirazi Beheshtiha, Marek Hatala, Dragan Gašević and Srećko Joksimović (2016). "The Role of Achievement Goal Orientations When Studying Effect of Learning Analytics Visualizations". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This article concerns the evaluation of the social capital of the European teachers participating in eTwinning Portal activities. The portal, run by European Schoolnet, counted more than 160,000 registered teachers from 35 countries, involved in more than 19,000 projects, as of 2010. The evaluation was performed using the Social Network Analysis approach.

The authors found correlations "between social network analysis measures like degree and betweenness centrality as well as the local clustering coefficient, activity statistics about usage of eTwinning and the quality management of European Schoolnet".
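
As an illustration (not the authors' tooling; the edge list file is hypothetical), the SNA measures named above can be computed with networkx:

```python
import networkx as nx

# Hypothetical collaboration network: one edge per pair of teachers
# who worked on a project together.
G = nx.read_edgelist("etwinning_collaborations.edgelist")

degree = dict(G.degree())                   # raw degree per teacher
betweenness = nx.betweenness_centrality(G)  # brokerage between groups
clustering = nx.clustering(G)               # local clustering coefficient

# Rank teachers by betweenness to surface potential brokers.
top_brokers = sorted(betweenness, key=betweenness.get, reverse=True)[:10]
print(top_brokers)
```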

For the analysis of eTwinning network data, three Learning Analytics tools have been developed:

  • eVa (eTwinning Network Visualization and Analysis), a network visualization and simple analysis tool.
  • CAfe (Competence Analyst for eTwinning), an SNA-based competence management and teachers' self-monitoring tool.
  • AHTC (Ad Hoc Transient Communities) services, which involve users into question-answer activities on the eTwinning Portal.

Citation: M.C. Pham, Y. Cao, Z. Petrushyna, R. Klamma. "Learning Analytics in a Teachers' Social Network" (2012). Proceedings of the 8th International Conference on Networked Learning 2012, ISBN 978-1-86220-283-2 | Url: http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2012/abstracts/pdf/pham.pdf

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

In this paper, the authors describe the issues and opportunities involved in technology-based management of informal learning in the workplace. The contents of the paper are derived from the experience of the TRAILER project.

The authors identify informal learning as the most prominent form of learning in the workplace (more than 70% of total learning), which is why the scientific community and policy makers are exploring new approaches and technologies to formalize it.

However, "tracking" informal learning with technology raises some issues:

  • Systems can be rejected by users.
  • It is difficult to promote adoption of the approach in day-to-day professional activities.
  • The use of a catalogue of competences to classify informal learning can be seen as a major barrier.

These issues can be partially mitigated by a more pragmatic approach, in particular by integrating systems with the applications that already form part of the users' working environment.

The authors discuss the definitions of formal, non-formal, and informal learning, suggesting that the distinction among them relates chiefly to the presence or absence of a management process.

The introduction of this management process has two consequences:

  • The validation of informal learning ensures that individuals are not only assessed on their formal qualifications, but also given credit for the whole range of learning which they have achieved in their lives.
  • Whatever the intentions of their designers may be, competence based systems and systems for the validation of informal learning both inevitably extend the reach of educational management into areas to which it did not previously have access.

However, according to the authors and the TRAILER project experience, "a high priority in supporting informal learning should be the avoidance of additional management processes".

In this scenario, data and Learning Analytics could represent a solution, considering that "if it were possible to simply monitor people's activities and deduce their capabilities from this, then the need to validate items of informal learning might disappear. There are increasing signs that this may be possible".

In the second part, the authors describe the structure and functioning of a prototype Learning Analytics-based system for tracking informal learning.

Citation: F.J. García-Peñalvo, D. Griffiths, M. Johnson, P. Sharples, D. Sherlock. "Problems and opportunities in the use of technology to manage informal learning". 2014, Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM '14), pp. 573-580. DOI=10.1145/2669711.2669958 http://doi.acm.org/10.1145/2669711.2669958 | Url: http://dl.acm.org/citation.cfm?id=2669958

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Learning analytics makes extensive use of trace data from learners interacting with Learning Management Systems (LMS), and one of the most common uses is to derive an estimate of the time each learner spent on task, that is, engaging in particular learning activities. The authors of this paper note that while this approach is widely used, the details are often not given, and the consequences of choices made in generating these measures are not explored.

They present two experiments exploring different measures of time on task, one using data from a fully online course at a Canadian university, and another using data from a blended course at an Australian university.

They found that time-on-task measures typically outperformed count measures for predicting student performance, but more importantly, identified that the precise choice of time-on-task measure "can have a significant effect on the overall fit of the model, its significance, and eventually on the interpretation of research findings".
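
To make those estimation choices concrete, here is a minimal sketch (an illustration, not the authors' code; the event data and the 10-minute cutoff are assumptions) of two common heuristics for deriving time on task from timestamped click data:

```python
from datetime import datetime

def time_on_task(timestamps, cutoff_s=600, cap=True):
    """Sum gaps between consecutive events for one learner/activity.

    Gaps longer than cutoff_s are either capped at the cutoff
    (cap=True) or discarded (cap=False); exactly the kind of
    choice the paper shows can change model fit and interpretation.
    """
    total = 0.0
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = (curr - prev).total_seconds()
        if gap <= cutoff_s:
            total += gap
        elif cap:
            total += cutoff_s
    return total

events = [datetime(2015, 3, 1, 10, 0), datetime(2015, 3, 1, 10, 4),
          datetime(2015, 3, 1, 11, 30)]  # hypothetical click stream
print(time_on_task(events, cap=True))   # 840.0 seconds
print(time_on_task(events, cap=False))  # 240.0 seconds
```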

Given their findings, they argue that there are implications for research in learning analytics: "Above all is the need for more caution when using time-on-task measures for building learning analytics models. Given that details of time-on-task estimation can potentially impact reported research findings, appropriately addressing time-on-task estimation becomes a critical part of standard research practice in the learning analytics community. This is particularly true in cases where time-on-task measures are not accompanied by additional measures such as counts of relevant activities."

They call for researchers to recognise the importance of time-on-task measures, for fuller detail to be given of the measures used, and for further investigation.

(This paper is an expanded version of an earlier paper at LAK15, which reported only on the data from the Canadian university: Kovanovic, Vitomir, Gasevic, Dragan, Dawson, Shane, Joksimovic, Srecko, Baker, Ryan S, & Hatala, Marek. (2015). Penetrating the black box of time-on-task estimation. Paper presented at LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723623.)

Citation: Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015) Does Time-on-task Estimation Matter? Implications on Validity of Learning Analytics Findings. | Url: https://epress.lib.uts.edu.au/journals/index.php/JLA/article/view/4501

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This paper focuses on the necessity of theories and empirical data for the development of Learning Analytics across all educational fields.

In a dedicated section, the authors describe the issues that Learning Analytics faces when applied to workplace learning, stating that "the current focus has been on formal education settings".

One of the main issues highlighted by the authors is that "workplace learning is contextually embedded into the workplace tasks and most commonly occurs through informal opportunities", which are hard for Learning Analytics to capture and monitor.

The authors also state two important differences between workplace learning and formal learning:

  1. Upskilling and lifelong learning in the workforce are not necessarily associated with formal qualifications.
  2. Just-in-time learning that occurs during daily work is considerably different from the structured sessions that characterize formal learning.

Citation: S. Dawson, N. Mirriahi, D. Gašević. (2015). Importance of theory in learning analytics in formal and workplace settings. Journal of Learning Analytics, 2(2), 1–4. http://dx.doi.org/10.18608/jla.2015.22.1 | Url: https://epress.lib.uts.edu.au/journals/index.php/JLA/article/view/4733

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

This paper discusses a scalable approach for integrating learning analytics into an online K-12 science curriculum. A description of the curriculum and the underlying pedagogical framework is followed by a discussion of the challenges to be tackled as part of this integration. The paper includes examples of data visualization based on teacher usage data, along with a methodology for examining an inquiry-based science programme. The paper uses data from a medium-sized school district, comprising 53 schools, 1,026 teachers, and nearly one-third of a million curriculum visits during the 2012-2013 school year. On their own, learning analytics data paint an incomplete picture of what teachers do in their classrooms with the curriculum. Surveys, interviews, and classroom observation can be used to contextualize this data. There are several hurdles that must be overcome to ensure that new data collection and analysis tools fulfill their promise. Schools and districts must address the gap in access to technology. While some schools have achieved one-to-one computing, most schools are not even close to this goal, and this has profound implications for their ability to collect reliable analytics data. Conversations with and observations of teachers reveal that teachers and students often share accounts, and that students are limited in the activities they can complete online. This means that analytics data may not prove to be reliable.

Citation: Monroy, Carlos, Snodgrass Rangel, Virginia, & Whitaker, Reid. (2014). A strategy for incorporating learning analytics into the design and evaluation of a K-12 science curriculum. Journal of Learning Analytics, 1(2), 94-125. | Url: http://epress.lib.uts.edu.au/journals/index.php/JLA/issue/archive

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This study used main path analysis on Wikiversity data to identify core topics in the areas of biology and electrical engineering. The results could be useful for the Wikiversity community as a whole. They point to the need for better coordination of the disparate topics in a domain. They could be used to orient participants by showing them the importance of the topic they are working on, and by revealing important reference points to other core topics in the field. Seeing the main paths could prompt a new contributor to add to an existing strand of knowledge development or to start a new peripheral one. Advanced participants in the community might benefit from the analysis as an historical reconstruction of the shared knowledge-building process, comparing their own visions and goals with the actual knowledge development of the community and identifying topical gaps. The paper presents analysis steps in detail, including a description of a tool environment that is designed for flexible use by noncomputer experts.
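
For readers unfamiliar with the technique, here is a minimal sketch of main path analysis using search path count (SPC) edge weights on a small hypothetical directed acyclic graph (this illustrates the general method, not the paper's tool environment):

```python
import networkx as nx

def main_path(G):
    """Greedy main path: follow the highest-SPC edge at each step."""
    order = list(nx.topological_sort(G))
    # n_minus[v]: number of paths from any source to v;
    # n_plus[v]: number of paths from v to any sink.
    n_minus = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
    for v in order:
        for u in G.predecessors(v):
            n_minus[v] += n_minus[u]
    n_plus = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
    for v in reversed(order):
        for w in G.successors(v):
            n_plus[v] += n_plus[w]
    spc = {(u, v): n_minus[u] * n_plus[v] for u, v in G.edges}

    # Start at the source with the heaviest outgoing edge, then keep
    # taking the highest-weight edge until a sink is reached.
    sources = [v for v in G if G.in_degree(v) == 0]
    node = max(sources, key=lambda s: max(
        (spc[(s, w)] for w in G.successors(s)), default=0))
    path = [node]
    while G.out_degree(node) > 0:
        node = max(G.successors(node), key=lambda w: spc[(path[-1], w)])
        path.append(node)
    return path

# Hypothetical topic-dependency DAG for a Wikiversity-like domain.
G = nx.DiGraph([("intro", "cells"), ("cells", "dna"), ("cells", "tissues"),
                ("dna", "genetics"), ("tissues", "genetics")])
print(main_path(G))  # e.g. ['intro', 'cells', 'dna', 'genetics']
```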

Citation: Halatchliyski, Iassen, Hecking, Tobias, Gohnert, Tilman, & Hoppe, H Ulrich. (2014). Analyzing the main paths of knowledge evolution and contributor roles in an open learning community. Journal of Learning Analytics, 1(2), 72-93. | Url: http://epress.lib.uts.edu.au/journals/index.php/JLA/issue/archive

Type: Evidence | Proposition: D: Ethics | Polarity: | Sector: | Country:

There is growing concern about the extent to which individuals are tracked while online. Within higher education, understanding of issues surrounding student attitudes to privacy is influenced not only by the apparent ease with which members of the public seem to share the detail of their lives, but also by the traditionally paternalistic institutional culture of universities.

This paper explores issues around consent and opting in or out of data tracking. It considers how three providers of massive open online courses (Coursera, EdX and FutureLearn) inform users about data usage. It also discusses how higher education institutions can work toward an approach that engages students and informs them in more detail about the implications of learning analytics on their personal data.

The paper restates the need to

  1. develop a coherent approach to consent, taking into account the findings of research into how people make decisions about personal data
  2. recognise that people can only engage selectively in privacy self management
  3. adjust the timing of privacy law to take into account that data may be combined and reanalysed in the future
  4. develop more substantive privacy rules

This paper was nominated for the best paper award at the 2015 Learning Analytics and Knowledge conference.

Citation: Prinsloo, Paul, & Slade, Sharon. (2015). Student privacy self-management: implications for learning analytics. Paper presented at the LAK '15, Poughkeepsie, NY. DOI 10.1145/2723576.2723585 | Url: http://dl.acm.org/citation.cfm?id=2723576