This paper discusses a scalable approach for integrating learning analytics into an online K–12 science curriculum. A description of the curriculum and the underlying pedagogical framework is followed by a discussion of the challenges to be tackled as part of this integration. The paper includes examples of data visualization based on teacher usage data, along with a methodology for examining an inquiry‐based science programme. The paper uses data from a medium‐sized school district, comprising 53 schools, 1,026 teachers, and nearly one‐third of a million curriculum visits during the 2012–2013 school year. On their own, learning analytics data paint an incomplete picture of what teachers do in their classrooms with the curriculum. Surveys, interviews, and classroom observation can be used to contextualize this data. There are several hurdles that must be overcome to ensure that new data collection and analysis tools fulfill their promise. Schools and districts must address the gap in access to technology. While some schools have achieved one‐to‐one computing, most schools are not even close to this goal, and this has profound implications for their ability to collect reliable analytics data. Conversations with and observations of teachers reveal that teachers and students often share accounts, and that students are limited in the activities they can complete online. This means that analytics data may not prove to be reliable.
Affect and behaviour detectors were used to estimate student affective states and behaviour based on analysis of tutor log data. For every student action, the detectors estimated the probability that the student was in a state of boredom, engaged concentration, confusion, or frustration. They also estimated the probability that the student was engaging in off‐task or gaming behaviours.
- Boredom during problem solving was negatively correlated with performance
- Boredom during scaffolded tutoring was positively correlated with performance
- Confusion during problem solving was negatively correlated with performance
- Confusion during scaffolded tutoring was positively correlated with performance
- Engaged concentration was associated with positive learning outcomes
- Frustration was associated with positive learning outcomes
- Gaming the system was associated with poorer learning.
A unified model was used to predict student standardized examination scores from a combination of student affect, disengaged behaviour, and performance within the learning system. This model achieved high overall correlation with standardized exam scores.
Building on this work could produce positive evidence for 'Proposition A: learning analytics improve learning outcomes' and 'Proposition B: learning analytics improve learning support and teaching'.
There is no proven direct effect between the availability of this form of learning analytics and learning outcomes.

Van Leeuwen, A., Janssen, J., & Erkens, G. (2014). Effecten van learning analytics bij computer ondersteund samenwerkend leren. Kennisnet, 4W Issue 4-2014.
When students work together in a computer-supported learning environment, it is difficult to get a good picture of the groups' progress. Learning analytics, in which data on students' learning activities are displayed graphically, help teachers guide the groups better and make students aware of the collaboration process. The expectation is that insight into the course of the collaboration process contributes not only to better collaboration, but also to better educational results.
It is a perennial problem: in a group assignment, one of the group members does not take the task seriously, so part of the assignment stagnates and/or more work ends up on the shoulders of the others. As a teacher you often hear about this only afterwards, when the assignment has already been submitted. In group assignments in a computer-supported learning environment, these kinds of obstacles should come to light at an earlier stage, because the teacher can watch the collaboration process thanks to the information the computer system gathers on student activities. In practice, however, the amount of information collected is so large that teachers cannot see the forest for the trees. Visualizing the relevant data makes the information much more manageable: students see how much each group member contributes, and the teacher understands the process. This graphical representation of student activities is a form of learning analytics.
Effect of learning analytics on guidance by the teacher
Teachers, too, can get feedback from the system on the students' activities in the form of visualizations. Can they assist students better with this information (arrow 3 -> 4 in Figure 1)? Our research on the Participation Tool (Van Leeuwen et al.) shows that, thanks to information on the individual contributions of group members, teachers can better detect imbalances in the groups and act on them. They also seem to send relatively more messages to groups where participation is problematic than to groups where it is not. In addition, their assessment of the participation is more specific, because they can accurately identify which group member does not participate proportionally.
In a recently completed study, teachers were given information about which task-relevant keywords students used in their discussions (Draft Trail Tool) and the extent to which they made progress on the task. The result was that teachers in general sent more messages, and again more often to groups with task problems, for instance when there was discussion about things other than the subject of the task.
The presence of learning analytics seems not only to help teachers keep an eye out for problems, but also to prompt them to provide help to groups that have them.
Effect of learning analytics on learning outcomes
In summary, we can conclude that learning analytics have beneficial effects on the course of collaboration. Insight into participation makes students' attitude more active, and improves the guidance given by teachers. It was therefore expected that this would also lead to better learning outcomes (arrow 5 in Figure 1). Is that the case? No: this expectation is to date not supported by research; there is no proven direct effect between the availability of this form of learning analytics and learning outcomes. Although students have a better understanding of the participation of their group members, and although teachers can guide the groups better on the basis of learning analytics, this does not produce better group outcomes, nor better individual performance. The added value of learning analytics is thus demonstrable for the collaboration process, but not for the learning outcomes.
The use of digital student monitoring systems (DSMSs) results, in comparison to other interventions aimed at small groups of pupils, in a positive and relatively large impact on learning outcomes.
More and more Dutch schools use digital student monitoring systems (Inspectorate of Education, 2013). In other countries and continents, too, schools increasingly make use of such systems, for example as a result of the increase in assessment testing (Heritage & Yeagley, 2005). Digital student monitoring systems (DSMSs) can be defined as systems through which teachers receive feedback, based on test results, about the results of the education they provide.
The information teachers receive on the basis of assessment tests can be considered feedback to teachers about the results of their teaching (Visscher & Coe, 2002). Teachers can adjust their instruction based on this feedback so that teaching is tailored to the specific learning needs of students. When this is the case, the use of a DSMS can result in higher learning outcomes. Both governments and schools therefore invest heavily in DSMSs (Ministry of Education, Culture and Science, 2007). It is thus important to investigate whether the use of DSMSs actually leads to higher learning results.
This research describes the results of a meta-analysis in which this connection was investigated. For the meta-analysis, high-quality experimental studies on the effects of DSMSs were selected. The following questions were answered with the analysis:
- What is the effect of the use of digital student management systems by teachers on student achievements?
- What factors hinder or facilitate the intended effect of the use of digital student management systems on student performance?
The meta-analysis includes 40 effects derived from fifteen different studies. The studies were found in one of six databases; a systematic search for relevant studies was conducted on the basis of predefined keyword lists. In addition, 126 international contacts were approached and asked whether they knew of other relevant studies or had performed relevant studies themselves. Thirty-eight studies were found in the databases, and 32 studies were found through the contacts. The studies were read and evaluated by two researchers, who eventually determined that fifteen studies met all predefined content and methodological criteria.
The effects in the selected studies were calculated using the Cohen's d and Hedges' g formulas. Each effect was assigned a weight that determined how much influence it had on the average effect size; these weights were based on the variance of the effects. A random-effects model was used to determine the average effect size. The analyses were performed with the program Comprehensive Meta-Analysis.
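The effect-size arithmetic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code or data: the numbers in the usage assertions are made up, and the weighting shown is the plain inverse-variance core (a full random-effects model would additionally add a between-study variance component, tau-squared, to each study's variance before weighting).

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d (standardized mean difference between treatment and
    control) with Hedges' small-sample correction factor J applied."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' correction for small samples
    return j * d

def weighted_mean_effect(effects, variances):
    """Inverse-variance weighted average effect size: effects estimated
    with smaller variance receive larger weights."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Illustrative use with invented numbers:
g = hedges_g(10.0, 8.0, 2.0, 2.0, 20, 20)          # d = 1.0, shrunk slightly by J
avg = weighted_mean_effect([0.5, 0.3], [0.05, 0.2])  # more precise study dominates
```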
In the studies in which the aim of the intervention was to increase the learning outcomes of small groups of students (for example, when the intervention was directed at a specific number of students within a class), a significant effect size of 0.4 was found (see Table 5). DSMS usage thus results, in comparison to other interventions aimed at small groups of pupils, in a positive and relatively large impact on learning outcomes. In the studies where the intervention was aimed at increasing the learning outcomes of all students within an entire school or school board, a significant effect size of 0.06 was found (see Table 5). DSMS usage, in comparison to other interventions aimed at schools, thus results in a slightly lower than average effect on learning outcomes.
The following factors seem to enhance the intended effect of a DSMS on outcomes:
- high feedback frequency,
- systems that apart from feedback also provide advice on instruction and processing methods that match the feedback received,
- a supportive intervention which takes place at least monthly.
The high average effect of a DSMS in studies with interventions aimed at small groups was not found in studies of interventions aimed at schools or school boards. It is therefore worth investigating further how the successful approach to DSMS use for small groups of students can be translated to the school or school-board level with comparably high effects.
The causes of stagnation, or indeed of growth, in pupils' development are not researched in depth, with the result that no appropriate plans can be developed or implemented.
In general, primary schools have one or more digital systems with which they can register and monitor the learning performance of students. The Ministry of Education, Culture and Science attributes an important function to these student management systems: conducting tests and analyzing learning performance should be an important tool to improve education and raise learning outcomes (Ministry of Education, Culture and Science, 2007). Research by the Inspectorate of Education shows that schools that analyze their learning performance, and then adjust their teaching based on the results of these analyses, indeed achieve higher output (Inspectorate of Education, 2010). Furthermore, the international literature supports the positive influence of using and analyzing student performance data on the learning outcomes of students (Feldman & Tung, 2001; Johnson, 2002).
To increase output, it is important that schools and school boards use the student management systems and correctly interpret the outcomes of the analyses; only then can the results of the analyses be used to improve education and output.
Various researchers, however, show that schools do not make optimal use of the possibilities of these digital student management systems (Meijer, Ledoux & Elshof, 2011; Visscher, Peters & Staman, 2010). The purpose of this report is therefore to provide information about the possibilities of digital student management systems and about how their use can lead to higher learning outcomes. Teachers, school boards and educational administrators can use the contents of this publication to assess and improve their own use of a student management system (for this purpose, the annexes include an instrument with which schools can determine their level of student management system use).
This report (in Dutch) also discusses and explains the following points:
- why and how a student management system can, within a result-oriented approach, lead to better learning outcomes,
- the specific support that digital student management systems can provide within result-oriented working (Hd. 2),
- the current use of student management systems at schools and school boards, and the conditions for their use (Hd. 3),
- the various stages of use of student management systems (Hd. 4),
- and finally, three inspiring examples of student management system use (Hd. 5).
In compiling this report, use has been made of knowledge and experience from the Focus project of the University of Twente. In this project, 150 primary schools are trained in result-oriented working, in which the use of student management systems plays an important role.
This report describes what result-oriented working entails and how student monitoring systems can support it. Schools and school boards work in a result-oriented way when they work systematically and purposefully with the aim of maximizing student achievement (Inspectorate of Education, 2010).
Teachers, schools and school boards need feedback on their actions and performance as a basis for monitoring and improving their education and performance. The LVS (student monitoring system) provides schools and school boards with important feedback. With the LVS and the associated Cito LVS tests, they can receive feedback on the performance and development of individual students, groups of students and schools. To examine in depth the causes of stagnation, or indeed of growth, in students' development, it is necessary to obtain additional information with other instruments. This can be done, among other ways, by analyzing method-dependent tests, by diagnostic interviews with students, and with the aid of classroom observations.
However, research shows that schools struggle with this last point. The causes of stagnation, or indeed of growth, in pupils' development are not researched in depth, with the result that no appropriate plans can be developed or implemented.
If this is the case, the LVS is used only as a measuring and viewing instrument, and not as a tool to improve educational practice. Often a first awareness is needed: assessment data (also) say something about the quality of education, and these data represent an important starting point for adapting and improving education.
The report formulates six stages of LVS use. These stages emerge from ongoing experiences within the Focus project; the steps are not supported empirically. The three practical examples indicate, however, that the structure of the pyramid can be found within schools, in particular the distinction between the measurement-and-check phase and the phase in which use leads to changes in teaching practice. In addition, these practical examples show that there is a tension between stage five and stage six. In stage five, school administrators use the LVS, among other things, as an instrument with which they keep an eye on the performance of teachers; if this is not done properly, it can lead to a blame culture.
When this is the case, the sixth stage of self-reflection, in which teachers on their own initiative systematically use performance feedback for their professional development, will not be reached. Furthermore, the use of the LVS within a result-oriented approach can then produce undesirable side effects: the focus on learning outcomes can increase the pressure on students, teachers, schools and school boards, which leads to teaching to the test. Even fraud with assessment data is not inconceivable. Teachers and administrators must therefore be alert to the occurrence of such negative effects. It is thus important to create a school culture in which teachers can openly and safely discuss the results of their work, not as a basis for controlling each other's actions, but to improve teaching quality on the basis of quality feedback.
If we know better what (different) students need for their development, and can connect to those needs better, students will also learn more, which is ultimately what matters in result-oriented working.
The right mix of forms of control (in adaptive learning systems E.B.) can provide a positive contribution to the learning process of learners.
In this research report (in Dutch), the influence of digital learning resources with adaptive features and capabilities on student learning is assessed through a qualitative, exploratory research design.
In light of the introduction of Tailored education in August 2014 and the high expectations of the role that ICT could play, this report tries to provide insights into the extent to which certain adaptive characteristics and capabilities of digital programs can have a positive effect on student learning.
The study was conducted with 37 students (12 girls and 25 boys), divided between the middle and upper years of primary education and class 2 HAVO/VWO of secondary education. As to the digital learning resources, four digital learning resources/programs and learning tasks in the field of Language/Dutch were chosen.
The general question of the research is as follows:
What impact do adaptive characteristics and capabilities of digital learning resources have on student learning?
The research questions were formulated on the basis of the findings of the theoretical framework.
Regarding the theoretical framework, the following conclusions are drawn:
- Little is known about the effects of adaptive digital learning resources.
- The effects appear mainly in the acquisition of knowledge in highly structured knowledge domains, and less in the acquisition of abstract and complex competencies.
- A form of shared control seems preferable to full learner control or full program control.
- Teachers attach importance to adaptive learning systems in which students work independently and in which feedback and reflection are seen as key instructional goals.
- Elaborated feedback in particular seems to be effective.
View the complete summary (in Dutch): http://www.kennisnet.nl/onderzoek/alle-onderzoeken/adaptiviteit-van-digitale-leermiddelen/
The Go-Lab federation of online labs opens up virtual laboratories (simulations), remote laboratories (real equipment accessible at a distance) and data sets from physical laboratory experiments (together called "online labs") for large-scale use in education. In this way, Go-Lab enables inquiry-based learning that promotes acquisition of deep conceptual domain knowledge and inquiry skills, with the further intent of interesting students in careers in science. For students, Go-Lab offers the opportunity to perform scientific experiments with online labs in pedagogically structured learning spaces. Go-Lab's inquiry learning spaces (ILSs) structure the students' inquiry process through an inquiry cycle and provide students with guidance in which dedicated (and connected) scaffolds for inquiry processes play a pivotal role. Teachers can create and adapt inquiry learning phases and the associated guidance in an ILS through a simple wiki-like interface, and can add scaffolds and tools to an ILS using a straightforward drag-and-drop feature. Teachers can also adapt scaffolds and tools (e.g., change the language or the concepts available in a concept mapper) through an "app composer". In creating ILSs, teachers are supported by scenarios and associated default ILSs that can be used as a starting point for development. In addition, teachers are offered a community framework to disseminate best practices and find mutual support. For lab owners, Go-Lab provides open interfacing solutions for easily plugging in their online labs and sharing them in the Go-Lab federation of online labs. In its first year, Go-Lab created ILSs for thirteen online labs from different lab providers, including renowned research organizations (e.g., CERN, ESA) that participate in the consortium. The design of these inquiry learning spaces has been evaluated through mock-ups and prototypes with students and teachers. More advanced and later versions will be evaluated and validated in large-scale pilots.
The sustainability of Go-Lab will come from the opportunity for the larger science education community to add new online labs and share ILSs. An open and Web-based community will capitalize on the “collective intelligence” of students, teachers, and scientists.
Abstract: Understanding the behavior of learners within learning applications and analyzing the factors that may influence the learning process play a key role in designing and optimizing learning applications. In this work we focus on a specific application named "1x1 trainer" that has been designed for primary school children to learn one-digit multiplications. We investigate the database of learners' answers to the asked questions (N > 440,000) by applying Markov chains. We want to understand whether the learners' answers to already asked questions can affect the way they answer subsequent questions and, if so, to what extent. Through our analysis we first identify the most difficult and easiest multiplications for the target learners by observing the probabilities of the different answer types. Next we try to identify influential structures in the history of learners' answers by considering Markov chains of different orders. The results are used to identify pupils who have difficulties with multiplications very early (after a couple of steps) and to optimize the way questions are asked for each pupil individually.
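The first-order core of such an analysis can be sketched as follows. This is a generic illustration rather than the authors' pipeline; the answer labels ("correct"/"wrong") and the toy sequences in the usage example are assumptions.

```python
from collections import defaultdict

def transition_probs(sequences):
    """Estimate first-order Markov transition probabilities between
    answer types from per-learner sequences of answers."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):  # consecutive answer pairs
            counts[prev][nxt] += 1
    probs = {}
    for state, row in counts.items():
        total = sum(row.values())  # normalize each row to probabilities
        probs[state] = {nxt: c / total for nxt, c in row.items()}
    return probs

# Illustrative use with two invented learner histories:
p = transition_probs([["correct", "correct", "wrong", "correct"],
                      ["correct", "wrong"]])
```

Higher-order chains, which the abstract also considers, follow the same pattern with k-tuples of recent answers as states; comparing how well different orders predict the next answer is one way to find "influential structures" in the answer history.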
We analyze log data generated by an experiment with Fractions Tutor, an intelligent tutoring system. The experiment compares the educational effectiveness of instruction with single and multiple graphical representations. We extract the error-making and hint-seeking behaviors of each student to characterize their learning strategy. Using an expectation-maximization approach, we cluster the students by learning strategy. We find that a) experimental condition and learning outcome are clearly associated, b) experimental condition and learning strategy are not, and c) almost all of the association between experimental condition and learning outcome is found among students implementing just one of the learning strategies we identify. This class of students is characterized by relatively high rates of error as well as a marked reluctance to seek help. They also show the greatest educational gains from instruction with multiple rather than single representations. The behaviors that characterize this group illuminate the mechanism underlying the effectiveness of multiple representations and suggest strategies for tailoring instruction to individual students. Our methodology can be implemented in an on-line tutoring system to dynamically tailor individualized instruction.
The paper gives some positive support for LACE hypothesis B in a particular context, that of multiple graphical representations in maths and science learning in an intelligent tutoring system, by promoting the tailoring of instruction to individual students' needs.
In this paper, we incorporate scaffolding and change of tutor context within the Bayesian Knowledge Tracing (BKT) framework to track students' developing inquiry skills. These skills are demonstrated as students experiment within interactive simulations for two science topics. Our aim is twofold. First, we desire to improve the models' predictive performance by adding these factors. Second, we aim to interpret these extended models to reveal whether our scaffolding approach is effective, and whether inquiry skills transfer across the topics. We found that incorporating scaffolding yielded better predictions of individual students' performance than the classic BKT model. By interpreting our models, we found that scaffolding appears to be effective at helping students acquire these skills, and that the skills transfer across the topics.
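The classic BKT update that the paper extends can be sketched as follows. This is the standard textbook formulation with illustrative parameter values, not the authors' extended model (their contribution is adding scaffolding and tutor-context factors to this core).

```python
def bkt_update(p_know, correct, p_slip, p_guess, p_learn):
    """One step of classic Bayesian Knowledge Tracing: Bayesian update of
    P(skill known) given the observed response, then the learning transition."""
    if correct:
        # P(correct) = P(known)*(1 - slip) + P(unknown)*guess
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        # P(wrong) = P(known)*slip + P(unknown)*(1 - guess)
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # chance of learning the skill on this practice opportunity
    return posterior + (1 - posterior) * p_learn

# Illustrative use: a correct answer raises the mastery estimate,
# a wrong answer lowers it (parameter values are made up).
p_after_correct = bkt_update(0.5, True, 0.1, 0.2, 0.1)
p_after_wrong = bkt_update(0.5, False, 0.1, 0.2, 0.1)
```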
This paper reports research using Bayesian Knowledge Tracing to predict student performance in inquiry process skills across science topics (particularly data collection). The main focus is on the accuracy of the prediction, but the authors also use the models to evaluate the effectiveness of the scaffolding offered. This could be used as positive evidence for LACE hypothesis B: improving teaching.