Archives

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

We designed and studied an innovative semantic visual learning analytics tool for orchestrating today's programming classes. The visual analytics integrates sources of learning activities by their content semantics. It automatically processes paper-based exams by associating sets of concepts with the exam questions. Results indicated that automatic concept extraction from exams was promising and could be a viable technological solution to a real-world issue. We also discovered that indexing was especially effective for complex content, where it covered more comprehensive semantics. Subjective evaluation revealed that the dynamic concept indexing provided teachers with immediate feedback for producing more balanced exams.

Citation: Sharon Hsiao, Sesha Kumar Pandhalkudi Govindarajan and Yiling Lin (2016). "Semantic Visual Analytics for Today's Programming Courses". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
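The paper's concept-extraction method is not specified in this abstract; a minimal keyword-matching sketch (with a hypothetical concept vocabulary, not the one used in the paper) illustrates the general idea of associating concepts with exam questions:

```python
# Minimal sketch of concept indexing: map exam questions to programming
# concepts by keyword matching. The concept vocabulary below is hypothetical.
CONCEPT_KEYWORDS = {
    "loops": {"for", "while", "iterate", "loop"},
    "recursion": {"recursive", "recursion", "base case"},
    "arrays": {"array", "index", "element"},
}

def index_question(question_text):
    """Return the set of concepts whose keywords appear in the question."""
    text = question_text.lower()
    return {concept for concept, keywords in CONCEPT_KEYWORDS.items()
            if any(kw in text for kw in keywords)}

concepts = index_question("Write a for loop that sums the elements of an array.")
```

A real system would use more robust semantic matching than substring search, but the output shape (question, set of concepts) is the same.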

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

Researchers invested in K-12 education struggle not just to enhance pedagogy, curriculum, and student engagement, but also to harness the power of technology in ways that will optimize learning. Online learning platforms offer a powerful environment for educational research at scale. The present work details the creation of an automated system designed to provide researchers with insights regarding data logged from randomized controlled experiments conducted within the ASSISTments TestBed. The Assessment of Learning Infrastructure (ALI) builds upon existing technologies to foster a symbiotic relationship beneficial to students, researchers, the platform and its content, and the learning analytics community. ALI is a sophisticated automated reporting system that provides an overview of sample distributions and basic analyses for researchers to consider when assessing their data. ALI's benefits can also be felt at scale through analyses that crosscut multiple studies to drive iterative platform improvements while promoting personalized learning.

Citation: Korinn Ostrow, Doug Selent, Yan Wang, Eric Van Inwegen, Neil Heffernan and Joseph Jay Williams (2016). "The Assessment of Learning Infrastructure (ALI): The Theory, Practice, and Scalability of Automated Assessment". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Wheel-spinning is the phenomenon where students, in spite of repeated practice, make no progress towards mastering a skill. Prior research has shown that a considerable number of students can get stuck in the mastery learning cycle, unable to master the skill despite the affordances of the educational software. In such situations, the tutor's promise of 'infinite practice' via mastery learning becomes more a curse than a blessing. Prior research on wheel-spinning overlooks two aspects: how much time is spent wheel-spinning, and the problem of imbalanced data. This work provides an estimate of the amount of time students spend wheel-spinning. A first-cut approximation is that 24% of student time in the ASSISTments system is spent wheel-spinning. However, the data used to train the wheel-spinning model were imbalanced, biasing the model's predictions so that it undercounts wheel-spinning. We identify this misprediction as an instance of a general model-extrapolation issue within EDM, provide an algebraic workaround that adjusts the detector's predictions to better accord with reality, and show that students spend approximately 28% of their time wheel-spinning in ASSISTments.

Citation: Yan Wang, Yue Gong and Joseph Beck (2016). "How Long Must We Spin Our Wheels? Analysis of Student Time and Classifier Inaccuracy". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
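The paper's algebraic workaround is not reproduced in this abstract, but a standard prevalence correction for a biased binary detector (the Rogan-Gladen estimator) illustrates the kind of adjustment involved; the sensitivity and specificity values below are hypothetical, not the paper's:

```python
def corrected_prevalence(observed_rate, sensitivity, specificity):
    """Correct the raw positive-prediction rate of a binary detector
    for its known sensitivity and specificity (Rogan-Gladen estimator)."""
    false_positive_rate = 1.0 - specificity
    return (observed_rate - false_positive_rate) / (sensitivity - false_positive_rate)

# Hypothetical detector characteristics; not the paper's actual values.
estimate = corrected_prevalence(observed_rate=0.24, sensitivity=0.75, specificity=0.97)
```

With a detector that misses some true positives, the corrected estimate exceeds the raw 24% rate, which is the direction of the paper's revised 28% figure.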

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

We present our approach to designing and evaluating tools that can assist teachers in classroom settings where students are using Exploratory Learning Environments (ELEs), taking as our case study the MiGen system, which targets 11-14 year old students' learning of algebra. We discuss the challenging role of teachers in exploratory learning settings and motivate the need for visualisation and notification tools that can help teachers focus their attention across the whole class and inform their interventions. We present the design and evaluation approach followed during the development of MiGen's Teacher Assistance tools, drawing parallels with the recently proposed LATUX workflow but also discussing how we go beyond it by including a large number of teacher participants in our evaluation activities, so as to gain the benefit of different viewpoints. We present and discuss the results of the evaluations, which show that participants appreciated the capabilities of the tools and were mostly able to use them quickly and accurately.

Citation: Manolis Mavrikis, Sergio Gutierrez Santos and Alexandra Poulovassilis (2016). "Design and Evaluation of Teacher Assistance Tools for Exploratory Learning Environments". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Prerequisite skill structures have been closely studied in past years leading to many data-intensive methods aimed at refining such structures. While many of these proposed methods have yielded success, defining and refining hierarchies of skill relationships are often difficult tasks. The relationship between skills in a graph could either be causal, indicating a prerequisite relationship (skill A must be learned before skill B), or non-causal, in which the ordering of skills does not matter and may indicate that both skills are prerequisites of another skill. In this study, we propose a simple, effective method of determining the strength of pre-to-post-requisite skill relationships. We then compare our results with a teacher-level survey about the strength of the relationships of the observed skills and find that the survey results largely confirm our findings in the data-driven approach.

Citation: Seth Adjei, Anthony Botelho and Neil Heffernan (2016). "Predicting Student Performance on Post-requisite Skills Using Prerequisite Skill Data: An alternative method for refining Prerequisite Skill Structures". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
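The abstract does not give the paper's exact strength measure; one simple data-driven proxy is the gap in post-requisite success rates between students who did and did not master the prerequisite. The sketch below uses toy data with hypothetical numbers:

```python
def relationship_strength(records):
    """records: list of (mastered_prereq: bool, correct_on_post: bool).
    Returns P(correct | mastered) - P(correct | not mastered); a larger
    gap suggests a stronger pre-to-post-requisite relationship."""
    def success_rate(flag):
        group = [post for pre, post in records if pre == flag]
        return sum(group) / len(group)
    return success_rate(True) - success_rate(False)

# Toy data: 4 students who mastered prerequisite skill A, 4 who did not.
records = [(True, True), (True, True), (True, True), (True, False),
           (False, False), (False, False), (False, True), (False, False)]
strength = relationship_strength(records)
```

A near-zero gap would suggest a non-causal relationship (ordering does not matter), while a large positive gap is consistent with a prerequisite link.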

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Given the importance of reading proficiency and habits for young students, an online e-quiz bank, Reading Battle, was launched in 2014 to facilitate reading improvement for primary-school students. With more than ten thousand questions in both English and Chinese, the system has attracted nearly five thousand learners who have made about half a million question-answering records. In an effort towards delivering a personalized learning experience to the learners, this study aims to discover potentially useful knowledge from learners' reading and question-answering records in the Reading Battle system, by applying association rule mining and clustering analysis. The results show that learners could be grouped into three clusters based on their self-reported reading habits. The rules mined from different learner clusters can be used to develop personalized recommendations to the learners. Implications of the results on evaluating and further improving the Reading Battle system are also discussed.

Citation: Xiao Hu, Yinfei Zhang, Samuel Chu and Xiaobo Ke (2016). "Towards Personalizing An E-quiz Bank for Primary School Students: An Exploration with Association Rule Mining and Clustering". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
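As a sketch of the rule-mining step, the standard support and confidence metrics for a candidate rule can be computed directly from toy records (the behaviour labels below are hypothetical, not drawn from the Reading Battle data):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent | consequent) / support(antecedent)."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Toy records: each set lists behaviours observed for one learner.
transactions = [
    {"reads_daily", "high_score"},
    {"reads_daily", "high_score"},
    {"reads_daily", "low_score"},
    {"reads_weekly", "low_score"},
]
conf = confidence(transactions, {"reads_daily"}, {"high_score"})
```

In practice rules would be mined per learner cluster, so the same antecedent can yield different recommendations for different groups.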

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Student mistakes are often not random but, rather, reflect thoughtful yet incorrect strategies. In order for educational technologies to make full use of students' performance data to estimate the knowledge of a student, it is important to model not only the conceptions but also the misconceptions that a student's particular pattern of successes and errors may indicate. The student models that drive the 'outer loop' of Intelligent Tutoring Systems typically only track positive skills and conceptions, not misconceptions. Here, we present a method of representing misconceptions in the kinds of Knowledge Component models, or Q-Matrices, that are used by student models to estimate latent knowledge. We show, in a case study on a fraction arithmetic dataset, that incorporating a misconception into the Knowledge Component model dramatically improves model fit. We also derive qualitative insights from comparing predicted learning curves across models that incorporate varying misconception-related parameters. Finally, we show that the inclusion of a misconception in a Knowledge Component model can yield individual student estimates of misconception strength. These estimates are significantly correlated with out-of-tutor individual measures of student errors indicative of the misconception.

Citation: Ran Liu, Rony Patel and Kenneth R. Koedinger (2016). "Modeling Common Misconceptions in Learning Process Data". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
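The model details are not in this abstract; a minimal additive-factors-style sketch shows how a Q-matrix column for a misconception, weighted negatively, can lower predicted success on items that trigger it. All parameter values and item names below are hypothetical:

```python
import math

def p_correct(item, q_matrix, skill_weights, misconception_strength):
    """Logistic prediction: skill KCs add to the log-odds of success,
    while a misconception KC subtracts in proportion to the student's
    estimated strength of that misconception."""
    logit = 0.0
    for kc in q_matrix[item]:
        if kc == "misconception":
            logit -= misconception_strength
        else:
            logit += skill_weights[kc]
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical Q-matrix: item2 also loads on a misconception KC.
q_matrix = {"item1": ["add_fractions"],
            "item2": ["add_fractions", "misconception"]}
weights = {"add_fractions": 1.0}
p_clean = p_correct("item1", q_matrix, weights, misconception_strength=2.0)
p_biased = p_correct("item2", q_matrix, weights, misconception_strength=2.0)
```

Fitting the misconception weight per student is what yields the individual strength estimates the abstract describes.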

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

This paper describes the development and evaluation of an affect-aware intelligent support component that is part of a learning environment known as iTalk2Learn. The intelligent support component is able to tailor feedback according to a student's affective state, which is deduced both from speech and interaction. The affect prediction is used to determine which type of feedback is provided and how that feedback is presented (interruptive or non-interruptive). The system includes two Bayesian networks that were trained with data gathered in a series of ecologically-valid Wizard-of-Oz studies, where the effect of the type of feedback and the presentation of feedback on students' affective states was investigated. This paper reports results from an experiment that compared a version that provided affect-aware feedback (affect condition) with one that provided feedback based on performance only (non-affect condition). Results show that students who were in the affect condition were less off-task, a result that was statistically significant. Additionally, the results indicate that students in the affect condition were less bored. The results also show that students in both conditions made learning gains that were statistically significant, while students in the affect condition had higher learning gains than those in the non-affect condition, although this result was not statistically significant in this study's sample. Taken together, the results point to the potential and positive impact of affect-aware intelligent support.

Citation: Beate Grawemeyer, Manolis Mavrikis, Wayne Holmes, Sergio Gutierrez-Santos, Michael Wiedmann and Nikol Rummel (2016). "Affecting Off-Task Behaviour: How Affect-aware Feedback Can Improve Student Learning". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Many pedagogical models in the field of learning analytics are implicit and do not overtly direct learner behavior. While this allows flexibility of use, it could also result in misaligned practice, and there are calls for more explicit pedagogical models in learning analytics. This paper presents an explicit pedagogical model, the Team and Self Diagnostic Learning (TSDL) framework, in the context of collaborative inquiry tasks. Key informing theories include experiential learning, collaborative learning, and the learning analytics process model. The framework was trialed through a teamwork competency awareness program for 14 year old students. A total of 272 students participated in the program. This paper foregrounds students' and teachers' evaluative accounts of the program. Findings reveal positive perceptions of the stages of the TSDL framework, despite identified challenges, which points to its potential usefulness for teaching and learning. The TSDL framework aims to provide theoretical clarity of the learning process, and foster alignment between learning analytics and the learning design. The current work provides trial outcomes of a teamwork competency awareness program that used dispositional analytics, and further efforts are underway to develop the discourse layer of the analytic engine. Future work will also be dedicated to application and refinement of the framework for other contexts and participants, learners and teachers alike.

Citation: Elizabeth Koh, Antonette Shibani, Jennifer Pei-Ling Tan and Helen Hong (2016). "A Pedagogical Framework for Learning Analytics in Collaborative Inquiry Tasks: An Example from a Teamwork Competency Awareness Program". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: A: Learning | Polarity: | Sector: | Country:

Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational content to the individual needs of each student. Although much progress has been made in modeling students' learning from data and predicting performance, these models have not been applied in real classrooms. This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It uses knowledge tracing to model students' skill acquisition over time and selects questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasons about multiple attempts to solve problems, and integrates item difficulty. This model is shown to outperform other BKT models that account for none, or only some, of these features. The model was incorporated into a sequencing algorithm and deployed in two schools, where it was compared to a baseline sequencing algorithm designed by pedagogical experts. In both schools, students using the BKT sequencing approach solved more difficult questions, and with better performance, than did students who used the expert-based approach. Students were also more engaged using the BKT approach, as determined by their log-ins to the system and a questionnaire. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to meet students' individual learning needs.

Citation: Yossi Ben-David, Avi Segal and Kobi Gal (2016). "Sequencing Educational Content in Classrooms using Bayesian Knowledge Tracing". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.
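The abstract does not spell out the model equations; a minimal single-skill BKT update, extended with a simple linear blend for partial-credit scores, sketches the idea. The partial-credit blend is an illustrative assumption, not necessarily the paper's formulation:

```python
def bkt_update(p_know, score, guess, slip, transit):
    """One BKT step. score in [0, 1]: 1 = fully correct, 0 = incorrect.
    Intermediate scores linearly blend the correct/incorrect observation
    likelihoods (an assumed extension for partial credit)."""
    lik_known = score * (1 - slip) + (1 - score) * slip
    lik_unknown = score * guess + (1 - score) * (1 - guess)
    # Bayesian posterior on mastery given the observation.
    posterior = (p_know * lik_known) / (p_know * lik_known
                                        + (1 - p_know) * lik_unknown)
    # Learning transition: chance of acquiring the skill after practice.
    return posterior + (1 - posterior) * transit

p_after_correct = bkt_update(p_know=0.4, score=1.0, guess=0.2, slip=0.1, transit=0.1)
```

A sequencing algorithm would run this update after each attempt and pick the next question whose difficulty matches the updated mastery estimate.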