The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases has even been reversed following concerns raised by governments, stakeholders and civil rights groups. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution's learning support by implementing data and analytics with a view to improving student success. In this paper, we try to get to the heart of the matter by analysing the most common concerns and the propositions made by the LA community to address them. We conclude the paper with an eight-point checklist, named DELICATE, that researchers, policy makers and institutional managers can apply to facilitate a trusted implementation of Learning Analytics.
The majority of learning analytics research focuses on predicting course performance and modeling student behavior, with an emphasis on identifying students who are at risk of failing a course. Learning analytics should have a stronger focus on improving the quality of learning for all students, not only on identifying at-risk students. To do so, we need to understand what successful patterns look like when reflected in data, and subsequently adjust the course design to discourage unsuccessful patterns and facilitate successful ones.
However, when establishing these successful patterns, it is important to account for individual differences among students, since previous research has shown that not all students engage with learning resources to the same extent. Regulation strategies seem to play an important role in explaining the different usage patterns students display when using digital learning resources. When learning analytics research incorporates contextualized data about students' regulation strategies, we are able to differentiate between students at a more granular level.
The current study examined whether regulation strategies could account for differences in the use of various learning resources. It examined how students regulated their learning process and subsequently used the different learning resources throughout the course, and established how this use contributed to course performance.
The results show that students with different regulation strategies use the learning resources to the same extent. However, the use of learning resources influences course performance differently for different groups of students. This paper recognizes the importance of contextualizing learning data with a broader set of indicators in order to understand the learning process. With our focus on differences between students, we strive for a shift within learning analytics from identifying at-risk students towards contributing to the educational design process and enhancing the quality of learning for all students.
This paper introduces a tool, named the Network Awareness Tool, for investigating informal learning in the workplace. The importance of tools like this is considerable, given that "Informal learning is an important driver for professional development and workplace learning. However [...], there is a problem when it comes to making it a real asset within organizations: Informal learning activities are mostly invisible to others, sometimes the learners themselves might not even be aware of the learning that occurs. As a consequence informal learning in organizations goes undetected, remains off the radar of HR departments and is therefore hard to assess, manage and value".
To test the tool, a target group was selected, composed of teaching professionals working in school organizations. The tool is grounded in several theories:
- Networked Learning Theory: "Networked Learning Theory is an emerging perspective that tries to understand learning by asking the question how people develop and maintain a ‘web’ of social relations used for their own and reciprocal learning and professional development".
- Social Network Theory: "Social Network Theory asserts that the constitution of a network may influence the accessibility of information and resources and that the social structure may offer potential for the exchange of resources". The structure and the dimensions of a social network can be analysed by Social Network Analysis.
- Social Network Analysis: "According to Social Network Analysis a network consists of nodes and ties. Nodes are the individual actors within a network and ties are the relationships between the actors. The impact of the structure of social networks can be studied on three levels: first the positions people have in a network (individual dimension), the relational level (ties dimension) and finally the overall network structure (network dimension)".
- Social Capital Theory: this theory concerns "the relational resources embedded in social ties and how actors interact to gain access to these resources".
- Communities of Practice: "the collective advancement of knowledge and the development of shared identities comes together in the community aspect of social learning, which we base on the well known concept of communities of practice".
- Individual demographics: this is an important aspect to take into account, considering that "age and years of experience can also have an impact on teachers’ professional development. Senior employees tend to take less initiatives in their professional development".
Thanks to its Learning Analytics functionalities, the Network Awareness Tool can depict the "actors" involved in the social network and track the quality and nature of the ties between them. The contents of ties can also be tracked using meta-tags. Using Social Network Analysis, the Network Awareness Tool can map the structure and density of the social network, indicating to HR departments which topics are particularly relevant for informal learning in the workplace.
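As a minimal sketch of the three Social Network Analysis levels described above (individual positions, ties, overall structure), the following plain-Python example computes them for a tiny, entirely hypothetical advice-seeking network; it is not the actual Network Awareness Tool, and the names and topic tags are invented for illustration.

```python
# Hypothetical ties between teachers, meta-tagged with the topic of the exchange.
ties = [
    ("Anna", "Ben", "didactics"),
    ("Anna", "Carla", "ICT"),
    ("Ben", "Carla", "didactics"),
    ("Carla", "Dirk", "assessment"),
]
actors = {a for u, v, _ in ties for a in (u, v)}

# Individual dimension: an actor's position, e.g. degree centrality
# (the share of the other actors this actor is directly tied to).
degree = {a: sum(a in (u, v) for u, v, _ in ties) for a in actors}
centrality = {a: d / (len(actors) - 1) for a, d in degree.items()}

# Ties dimension: the relationships themselves, grouped here by topic tag,
# which is how salient topics for informal learning could be spotted.
by_topic = {}
for u, v, topic in ties:
    by_topic.setdefault(topic, []).append((u, v))

# Network dimension: the overall structure, e.g. density
# (realised ties as a fraction of all possible ties).
n = len(actors)
density = len(ties) / (n * (n - 1) / 2)

print(centrality["Carla"])  # 1.0 -- Carla is tied to every other actor
print(round(density, 2))    # 0.67
```

A dedicated library such as networkx offers these measures out of the box; the point here is only how nodes, ties and structure map onto simple computations.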
This literature paper briefly anticipates and outlines a project on integrating labour market information into a learning analytics goal-setting application, with the aim of helping students develop skills that are currently in demand on the labour market.
The authors highlight a paradoxical contemporary trend in the labour market: high levels of youth unemployment coexist with companies' difficulties in finding candidates with the right job skills.
In the IT solution under development, labour market data will be analyzed to extract information that may inform student planning. This will lead to a "goal-setting program in which students can specify goals (e.g., learning goals as subgoals of more distant career goals)" and have access to "relevant labour market information, view progress and success indicators, and receive recommendations from course advisors".
According to the authors, "this research expands the impact of learning analytics beyond the educational setting by helping students to navigate the education-to-employment pathway using a goal-setting application" characterized by analytical methods combined with an examination of learning constructs and educational theories.
This study aims to develop a recommender system for social learning platforms that combine traditional learning management systems with commercial social networks such as Facebook. We therefore take the social interactions of users into account when making recommendations on learning resources. We propose the use of graph-walking methods to improve the performance of well-known baseline algorithms. We evaluate the proposed graph-based approach in terms of its F1 score, an effective combination of precision and recall, the two fundamental metrics in the recommender systems area. The results show that the graph-based approach can help to improve the performance of the baseline recommenders, particularly for the rather sparse educational datasets used in this study.
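The F1 evaluation mentioned above can be sketched in a few lines. The data here is hypothetical (invented resource names, not from the study): the items a recommender suggested to one learner are compared against the resources that learner actually found relevant.

```python
# Hypothetical recommendations for one learner vs. the resources
# that learner actually found relevant.
recommended = {"video_intro", "quiz_1", "article_graphs", "forum_thread_9"}
relevant = {"quiz_1", "article_graphs", "slides_week2"}

hits = recommended & relevant
precision = len(hits) / len(recommended)  # share of suggestions that were useful
recall = len(hits) / len(relevant)        # share of useful items that were suggested
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(f1, 2))  # -> 0.57
```

Because F1 is a harmonic mean, it rewards a balance of precision and recall, which is why it is a common single-number summary in recommender evaluation.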
This longitudinal study explores the effects of tracking and monitoring the time devoted to learning with a mobile tool on self-regulated learning. Graduate students (n = 36) from three different online courses used their own mobile devices to track how much time they devoted to learning over a period of four months. Repeated measures of the Online Self-Regulated Learning Questionnaire and the Validity and Reliability of Time Management Questionnaire were taken throughout the course. Our findings reveal positive effects of time tracking on time management skills. Variations in the channel, content and timing of the mobile notifications designed to foster reflective practice are investigated, and time-logging patterns are described. These results not only provide evidence of the benefits of recording learning time, but also suggest relevant cues on how mobile notifications should be designed and prompted to support the self-regulated learning of students in online courses.
Evidence of the month: July 2015
This paper focuses on the issues of stability and sensitivity of learning-analytics-based prediction models. It investigates whether prediction models remain the same, when the instructional context is repeated with a new cohort of students, and whether prediction models change when relevant aspects of the instructional context are adapted.
The research applies Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and learning management systems.
The paper compares two cohorts of a large introductory quantitative methods module, with 1,005 students in the 2013-2014 cohort, and 1,006 students in the 2014-2015 cohort. Both modules were based on principles of blended learning – combining face-to-face problem-based learning sessions with e-tutorials – and had a similar instructional design, except for an intervention into the design of quizzes administered in the module. Focusing on their predictive power, the paper provides evidence of the stability and the sensitivity of regression-type prediction models.
The paper reports that the value of dispositional data is strongly dependent on the time at which richer (assessment) data become available, and on the need for timely signalling of underperformance. If timely feedback is required, the combination of data extracted from e-tutorials (both in practice and test modes) and learning disposition data was found to be the best mix to support learning analytics applications.
Feedback related to learning dispositions (for example, by flagging suboptimal learning strategies, or inappropriate learning regulation) is generally open to interventions to improve the learning process. The same is true of feedback related to suboptimal use of e-tutorials; it is both predictive and open for intervention.
This paper won the Best Paper award at the 7th International Conference on Computer Supported Education.
There is no proven direct effect between the availability of this form of learning analytics and learning outcomes.
Leeuwen, A. van, Janssen, J., & Erkens, G. (2014). Effecten van learning analytics bij computer ondersteund samenwerkend leren [Effects of learning analytics in computer-supported collaborative learning]. Kennisnet, 4W Issue 4-2014.
When students work together in a computer-supported learning environment, it is difficult to get a good picture of the progress of the groups. Learning analytics, in which data on students' learning activities are displayed graphically, helps teachers to guide the groups better and makes students aware of the collaboration process. The expectation is that insight into the course of the collaboration process contributes not only to better collaboration, but also to better educational results.
It is a familiar story: in a group assignment, one of the group members does not take the task seriously, and as a result part of the assignment stagnates and/or more work ends up on the shoulders of others. As a teacher you often hear this only afterwards, when the assignment has already been submitted. In group assignments in a computer-supported learning environment, these kinds of obstacles should come to light at an earlier stage, since the teacher can watch the collaboration process thanks to the information the computer system gathers on students' activities. In practice, however, the amount of information collected is so large that teachers cannot see the forest for the trees. By visualizing relevant data, the information becomes much more manageable: students see how much each of them contributes, and the teacher understands the process. This graphical representation of student activities is a form of learning analytics.
Effect of learning analytics on guidance by the teacher
Teachers can also get feedback from the system on the activities of the students, in the form of visualizations. Can they assist students better with this information (arrow 3 -> 4 in Figure 1)? Our research on the Participation Tool (Van Leeuwen et al.) shows that, thanks to information on the individual contributions of group members, teachers can better detect imbalances in the groups and act on them. They appear to send relatively more messages to groups where participation is problematic than to groups where this is not the case. In addition, their assessment of the participation is more specific, because they can accurately identify which group member does not participate proportionally.
In a recently completed study, teachers were given information about which task-relevant keywords students used in their discussions (Draft Trail Tool) and about the extent to which they made progress on the task. The result was that teachers in general sent more messages, again most often to groups with task-related problems, for instance when there were discussions about things other than the subject of the task.
The presence of learning analytics thus seems not only to assist teachers in keeping an eye on problems, but also to prompt them to provide help to groups with problems.
Effect of learning analytics on learning outcomes
In summary, we can conclude that learning analytics has beneficial effects on the course of collaboration. Insight into participation changes students' attitudes, making them more active, and it changes the guidance given by teachers. Reasoning onward, it was expected that this would also lead to better learning outcomes (arrow 5 in Figure 1). Is that the case? No: this expectation is to date not supported by research; there is no proven direct effect between the availability of this form of learning analytics and learning outcomes. Students gain a better understanding of the participation of their group members, and although teachers can guide the groups better on the basis of learning analytics, this does not produce better group outcomes, nor better individual performance. The added value of learning analytics is thus demonstrable for the collaboration process, but not for the learning outcomes.
Compared with other interventions aimed at small groups of pupils, the use of digital student monitoring systems (DSMSs) has a positive and relatively large impact on learning outcomes.
More and more Dutch schools use digital student monitoring systems (Inspectorate of Education, 2013). Schools in other countries and continents also increasingly make use of such systems, for instance as a result of the growth of assessment testing (Heritage & Yeagley, 2005). Digital student monitoring systems (DSMSs) can be defined as systems through which teachers receive feedback, based on test results, about the outcomes of the education they provide.
The information teachers receive on the basis of assessment tests can be considered feedback about the results of their teaching (Visscher & Coe, 2002). Teachers can adjust their instruction based on this feedback, so that their teaching is tailored to the specific learning needs of students. When this is the case, the use of a DSMS can result in higher learning outcomes. Both governments and schools therefore invest heavily in DSMSs (Ministry of Education, Culture and Science, 2007). It is thus important to investigate whether the use of DSMSs actually leads to higher learning outcomes.
This report describes the results of a meta-analysis in which this connection was investigated. For this meta-analysis, methodologically strong experimental studies on the effects of DSMSs were selected. The analysis answers the following questions:
- What is the effect of teachers' use of digital student monitoring systems on student achievement?
- Which factors hinder or facilitate the intended effect of the use of digital student monitoring systems on student performance?
The meta-analysis includes 40 effects derived from fifteen different studies. The studies were found in one of six databases; a systematic search for relevant studies was conducted on the basis of predefined keyword lists. In addition, 126 international contacts were approached and asked whether they knew of other relevant studies, or had performed relevant studies themselves. Thirty-eight studies were found in the databases, and 32 studies via the contacts. The studies were read and evaluated by two researchers, who eventually determined that fifteen studies met all predefined content and methodological criteria.
The effect sizes of the selected studies were calculated using the Cohen's d and Hedges' g formulas. Each effect was assigned a weight that determined how much it influenced the average effect size; these weights were based on the variance of the effects. A random-effects model was used to determine the average effect size. The analyses were performed with the program Comprehensive Meta-Analysis.
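The computations described here can be sketched compactly. The study data below is made up for illustration (it is not one of the fifteen studies), and the sketch simplifies to inverse-variance weighting; a full random-effects model would additionally add an estimated between-study variance (tau squared) to each weight's denominator.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d with the usual small-sample bias correction."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

def var_g(g, n1, n2):
    """Approximate sampling variance of g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

# Hypothetical (mean, sd, n) per study: DSMS group vs. control group.
studies = [
    ((105, 15, 30), (99, 14, 30)),
    ((102, 16, 120), (100, 15, 118)),
]

gs, ws = [], []
for (m1, s1, n1), (m2, s2, n2) in studies:
    g = hedges_g(cohens_d(m1, s1, n1, m2, s2, n2), n1, n2)
    gs.append(g)
    ws.append(1 / var_g(g, n1, n2))  # inverse-variance weight

# Weighted average effect size across studies.
mean_effect = sum(w * g for w, g in zip(ws, gs)) / sum(ws)
print(round(mean_effect, 2))
```

Note how the larger study, with its smaller variance, pulls the average towards its own (smaller) effect, which is exactly the role the weights play in the meta-analysis.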
In the studies in which the aim of the intervention was to increase the learning outcomes of small groups of students (for example, interventions directed at a specific number of students within a class), a significant effect size of 0.4 was found (see Table 5). Compared with other interventions aimed at small groups of pupils, DSMS usage thus has a positive and relatively large impact on learning outcomes. In the studies where the intervention was aimed at increasing the learning outcomes of all students within an entire school or school board, a significant effect size of 0.06 was found (see Table 5). Compared with other interventions aimed at schools, DSMS usage results in a slightly lower than average effect on learning outcomes.
The following factors seem to enhance the intended effect of a DSMS on outcomes:
- a high feedback frequency,
- systems that, in addition to feedback, also provide advice on instruction and processing methods matching the feedback received,
- a supportive intervention that takes place at least monthly.
The on average large effect of a DSMS found in studies with interventions aimed at small groups was not found in studies of interventions aimed at schools or school boards. It is therefore worth investigating further how the successful approach of using a DSMS for small groups of students can be translated into approaches at school or school board level that yield comparably large effects.
The causes of stagnation, or even growth, in pupils' development are not researched in depth, with the result that no appropriate improvement plans can be developed or implemented.
In general, primary schools have one or more digital systems with which they can register and monitor the learning performance of students. The Ministry of Education, Culture and Science attributes an important function to these student management systems: conducting tests and analyzing learning performance should be an important tool for improving education and raising learning outcomes (Ministry of Education, Culture and Science, 2007). Research by the Inspectorate of Education shows that schools that analyze their pupils' learning performance, and then adjust their teaching based on the results of these analyses, indeed achieve higher output (Inspectorate of Education, 2010). Furthermore, the international literature supports the positive influence of using and analyzing student performance data on students' learning outcomes (Feldman & Tung, 2001; Johnson, 2002).
To increase output, it is important that schools and school boards use their student management systems and interpret the outcomes of analyses correctly; only then can the results of these analyses be used to improve education and output.
Various researchers, however, show that schools do not make optimal use of the possibilities of these digital student management systems (Meijer, Ledoux & Elshof, 2011; Visscher, Peters & Staman, 2010). The purpose of this report is therefore to provide information about the possibilities of digital student management systems and about how their use can lead to higher learning outcomes. Teachers, school boards and educational administrators can use the contents of this publication to assess and improve their own use of a student management system (for this purpose, the annexes include an instrument with which schools can determine their level of student management system use).
This report (in Dutch) also discusses and explains the following points:
- why and how a student management system can result in better learning outcomes within a results-oriented approach,
- the specific support that digital student management systems can provide within results-oriented working (Ch. 2),
- the current use of student management systems at schools and school boards, and the conditions for their use (Ch. 3),
- the various stages of use of student management systems (Ch. 4),
- and finally, three inspiring examples of the use of student management systems (Ch. 5).
This report draws on the knowledge and experience gained in the Focus project of the University of Twente, in which 150 primary schools are trained in results-oriented working; the use of student management systems plays an important role in this project.
This report describes what results-oriented working entails and how student monitoring systems can support it. Schools and school boards work in a results-oriented way when they work systematically and purposefully with the aim of maximizing student achievement (Inspectorate of Education, 2010).
Teachers, schools and school boards need feedback on their actions and performance as a basis for monitoring and improving their education and performance. The LVS (student monitoring system) provides schools and school boards with important feedback. With the LVS, and the associated Cito LVS tests, they can receive feedback on the performance and development of individual students, groups of students and schools. To examine the causes of stagnation, or even growth, in students' development in depth, it is necessary to obtain additional information with other instruments. This can be done, among other things, by analysing method-dependent tests, through diagnostic interviews with students, and with the aid of classroom observations.
However, research shows that schools struggle with this last point. The causes of stagnation, or even growth, in pupils' development are not researched in depth, with the result that no appropriate improvement plans can be developed or implemented.
If this is the case, the LVS is used only as a measuring and viewing instrument, and not as a tool to improve educational practice. Often, an initial awareness is needed that assessment data (also) say something about the quality of education, and that these data represent an important starting point for adapting and improving education.
The report formulates six stages of LVS use. These stages emerged from ongoing experience within the Focus project; they are not yet empirically supported. The three practical examples indicate, however, that the structure of the pyramid can be recognized within schools, in particular the distinction between the measuring-and-checking stages and the stages in which use leads to changes in teaching practice. In addition, these practical examples show that there is a tension between stage five and stage six. In stage five, school administrators use the LVS, among other things, as an instrument for keeping an eye on the performance of teachers; if this is not done properly, it can lead to a blame culture.
When this is the case, the sixth stage of self-reflection, in which teachers on their own initiative systematically use performance feedback for their professional development, will not be reached. Furthermore, the use of the LVS within a results-oriented approach can then produce undesirable side effects. The focus on learning outcomes can, for instance, increase the pressure on students, teachers, schools and school boards, resulting in teaching to the test; even fraud with assessment data is not inconceivable. Teachers and administrators must therefore be alert to the occurrence of such negative effects. It is thus important to create a school culture in which teachers can openly and safely discuss the results of their work, not as a basis for controlling one another's actions, but to improve teaching quality on the basis of good feedback.
If we know better what (different) students need for their development and can respond to those needs more effectively, students will also learn more, which is ultimately what matters in results-oriented working.