Archives

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

We designed and studied an innovative semantic visual learning analytics tool for orchestrating today's programming classes. The visual analytics integrates sources of learning activities by their content semantics. It automatically processes paper-based exams by associating sets of concepts with the exam questions. Results indicated that automatic concept extraction from exams was promising and could be a potential technological solution to a real-world issue. We also found that indexing was especially effective for complex content, covering more comprehensive semantics. Subjective evaluation revealed that the dynamic concept indexing provided teachers with immediate feedback for producing more balanced exams.
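The abstract does not specify how concepts are extracted from exam questions; as a minimal sketch of the general idea, the snippet below indexes questions against a concept lexicon by keyword matching. The lexicon, keywords, and question text are all hypothetical, not from the paper.

```python
# Hypothetical concept lexicon for a programming course (illustrative only).
CONCEPT_KEYWORDS = {
    "loops": {"for", "while", "iterate", "loop"},
    "recursion": {"recursive", "recursion", "base case"},
    "arrays": {"array", "index", "element"},
}

def index_question(question: str) -> set[str]:
    """Return the set of concepts whose keywords appear in the question text."""
    text = question.lower()
    return {concept for concept, words in CONCEPT_KEYWORDS.items()
            if any(w in text for w in words)}

question = "Write a for loop that sums every element of an array."
print(sorted(index_question(question)))  # ['arrays', 'loops']
```

A production system would use semantic matching rather than raw substrings, but the indexing output (question -> concept set) is the same shape either way.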

Citation: Sharon Hsiao, Sesha Kumar Pandhalkudi Govindarajan and Yiling Lin (2016). "Semantic Visual Analytics for Today's Programming Courses". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

This paper explores a longitudinal approach to combining engagement, performance, and social connectivity data from a MOOC using the framework of exponential random graph models (ERGMs). The idea is to model the social network in the discussion forum in a given week not only on performance (assignment scores) and overall engagement (lecture and discussion views) covariates within that week, but also on the same person-level covariates from adjacent previous and subsequent weeks. We find that over all eight weekly sessions, the social networks constructed from the forum interactions are relatively sparse and lack the tendency for preferential attachment. By analyzing data from the second week, we also find that individuals with higher performance scores from current, previous, and future weeks tend to be more connected in the social network. Engagement with lectures had significant but sometimes puzzling effects on social connectivity. However, the relationships between social connectivity, performance, and engagement weakened over time, and results were not stable across weeks.
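The ERGM fits themselves would typically be done elsewhere (e.g., in R's statnet/ergm packages); purely as an illustration of the descriptive findings, the sketch below computes density and a hub check on a toy weekly forum reply network. All data are invented.

```python
# Illustrative only: not the paper's ERGM fit, just the descriptive checks
# (sparsity, absence of hubs) on a toy weekly reply network.
def density(edges, n_nodes):
    """Undirected density: observed ties over possible ties."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible

# Toy reply network among 6 learners in one week (hypothetical data).
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (4, 5)]}
degrees = [sum(1 for e in edges if v in e) for v in range(6)]

print(density(edges, 6))   # 4 of 15 possible ties -> sparse
print(max(degrees))        # flat degree distribution -> no hub, no sign of preferential attachment
```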

Citation: Mengxiao Zhu, Yoav Bergner, Yan Zhang, Ryan Baker, Yuan Wang and Luc Paquette (2016). "Longitudinal Engagement, Performance, and Social Connectivity: a MOOC Case Study Using Exponential Random Graph Models". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: C: Uptake | Polarity: | Sector: | Country:

Researchers invested in K-12 education struggle not just to enhance pedagogy, curriculum, and student engagement, but also to harness the power of technology in ways that will optimize learning. Online learning platforms offer a powerful environment for educational research at scale. The present work details the creation of an automated system designed to provide researchers with insights regarding data logged from randomized controlled experiments conducted within the ASSISTments TestBed. The Assessment of Learning Infrastructure (ALI) builds upon existing technologies to foster a symbiotic relationship beneficial to students, researchers, the platform and its content, and the learning analytics community. ALI is a sophisticated automated reporting system that provides an overview of sample distributions and basic analyses for researchers to consider when assessing their data. ALI's benefits can also be felt at scale through analyses that crosscut multiple studies to drive iterative platform improvements while promoting personalized learning.

Citation: Korinn Ostrow, Doug Selent, Yan Wang, Eric Van Inwegen, Neil Heffernan and Joseph Jay Williams (2016). "The Assessment of Learning Infrastructure (ALI): The Theory, Practice, and Scalability of Automated Assessment". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Journal writing is an important and common reflective practice in education. Students' reflection journals also offer a rich source of data for formative assessment. However, analyzing textual reflections in large classes presents challenges. Automatic analysis of students' reflective writing holds great promise for providing adaptive, real-time support for students. This paper proposes a method based on topic modeling techniques for theme exploration and reflection grade prediction. We evaluated this method on a sample of journal writings from pre-service teachers. The topic modeling method was able to discover the important themes and patterns that emerged in students' reflection journals. Weekly topic relevance and word count were identified as important indicators of journal grades. Based on the patterns discovered by topic modeling, prediction models were developed to automate the assessment and grading of reflection journals. The findings indicate the potential of topic modeling to serve as an analytic tool for teachers to explore and assess students' reflective thoughts in written journals.
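Once a topic model (e.g., LDA) has scored a journal's relevance to the week's themes, the two indicators the study highlights can feed a very simple predictor. The sketch below is a toy linear model; the weights, scale, and inputs are made up, not the paper's fitted model.

```python
# Hypothetical sketch: grade prediction from the two indicators the study
# identifies (weekly topic relevance and word count). Weights are invented.
def predict_grade(topic_relevance: float, word_count: int) -> float:
    """Toy linear predictor, clamped to an assumed 0-5 grading scale."""
    w_topic, w_words, bias = 3.0, 0.004, 1.0   # illustrative weights
    raw = bias + w_topic * topic_relevance + w_words * word_count
    return min(5.0, max(0.0, raw))

print(predict_grade(0.8, 350))   # on-topic, substantial entry -> high grade
print(predict_grade(0.1, 60))    # off-topic, short entry -> low grade
```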

Citation: Ye Chen, Bei Yu, Xuewei Zhang and Yihan Yu (2016). "Topic Modeling for Evaluating Students' Reflective Writing: a Case Study of Pre-service Teachers' Journals". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Wheel-spinning is the phenomenon where students, in spite of repeated practice, make no progress towards mastering a skill. Prior research has shown that a considerable number of students can get stuck in the mastery learning cycle, unable to master the skill despite the affordances of the educational software. In such situations, the tutor's promise of 'infinite practice' via mastery learning becomes more a curse than a blessing. Prior research on wheel-spinning overlooks two aspects: how much time is spent wheel-spinning and the problem of imbalanced data. This work provides an estimate of the amount of time students spend wheel-spinning. A first-cut approximation is that 24% of student time in the ASSISTments system is spent wheel-spinning. However, the data used to train the wheel-spinning model were imbalanced, resulting in a bias in the model's predictions that caused it to undercount wheel-spinning. We identify this misprediction as a general issue of model extrapolation within EDM, provide an algebraic workaround that modifies the detector's predictions to better accord with reality, and show that students spend approximately 28% of their time wheel-spinning in ASSISTments.
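The abstract does not spell out the algebraic workaround. One standard correction for a biased detector with known sensitivity and false positive rate is the Rogan-Gladen estimator, sketched below; the 0.80/0.02 rates are assumptions chosen to illustrate how an undercounting detector's 24% estimate can be pushed up to roughly 28%, not the paper's actual values.

```python
def corrected_rate(observed_rate: float, tpr: float, fpr: float) -> float:
    """Rogan-Gladen correction: recover the true positive-class rate from a
    detector's observed positive rate, given its sensitivity (tpr) and
    false positive rate (fpr)."""
    return (observed_rate - fpr) / (tpr - fpr)

# Illustrative numbers only: a detector that misses episodes (tpr < 1)
# undercounts, and the correction raises the estimate.
print(corrected_rate(0.24, 0.80, 0.02))  # ~0.28
```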

Citation: Yan Wang, Yue Gong and Joseph Beck (2016). "How Long Must We Spin Our Wheels? Analysis of Student Time and Classifier Inaccuracy". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

We present our approach to designing and evaluating tools that can assist teachers in classroom settings where students are using Exploratory Learning Environments (ELEs), using as our case study the MiGen system, which targets 11-14 year old students' learning of algebra. We discuss the challenging role of teachers in exploratory learning settings and motivate the need for visualisation and notification tools that can assist teachers in focusing their attention across the whole class and inform teachers' interventions. We present the design and evaluation approach followed during the development of MiGen's Teacher Assistance tools, drawing parallels with the recently proposed LATUX workflow but also discussing how we go beyond this to include a large number of teacher participants in our evaluation activities, so as to gain the benefit of different viewpoints. We present and discuss the results of the evaluations, which show that participants appreciated the capabilities of the tools and were mostly able to use them quickly and accurately.

Citation: Manolis Mavrikis, Sergio Gutierrez Santos and Alexandra Poulovassilis (2016). "Design and Evaluation of Teacher Assistance Tools for Exploratory Learning Environments". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Prerequisite skill structures have been closely studied in past years leading to many data-intensive methods aimed at refining such structures. While many of these proposed methods have yielded success, defining and refining hierarchies of skill relationships are often difficult tasks. The relationship between skills in a graph could either be causal, indicating a prerequisite relationship (skill A must be learned before skill B), or non-causal, in which the ordering of skills does not matter and may indicate that both skills are prerequisites of another skill. In this study, we propose a simple, effective method of determining the strength of pre-to-post-requisite skill relationships. We then compare our results with a teacher-level survey about the strength of the relationships of the observed skills and find that the survey results largely confirm our findings in the data-driven approach.
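The abstract does not give the exact strength measure. A minimal illustrative version, sketched below with invented data, is the gap in post-requisite success between students who mastered the prerequisite and those who did not; a large gap suggests a strong pre-to-post-requisite relationship.

```python
# Hypothetical sketch of a prerequisite-strength measure (not the paper's).
def prereq_strength(records):
    """records: (mastered_prereq, passed_post) pairs, one per student.
    Strength = P(pass post | mastered prereq) - P(pass post | not mastered)."""
    def pass_rate(group):
        return sum(passed for _, passed in group) / len(group) if group else 0.0
    mastered = [r for r in records if r[0]]
    not_mastered = [r for r in records if not r[0]]
    return pass_rate(mastered) - pass_rate(not_mastered)

# Toy data: mastering skill A strongly predicts success on skill B.
data = [(True, True)] * 8 + [(True, False)] * 2 + \
       [(False, True)] * 2 + [(False, False)] * 8
print(prereq_strength(data))  # ~0.6: 0.8 pass rate vs 0.2
```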

Citation: Seth Adjei, Anthony Botelho and Neil Heffernan (2016). "Predicting Student Performance on Post-requisite Skills Using Prerequisite Skill Data: An alternative method for refining Prerequisite Skill Structures". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Computer-based writing systems have been developed to provide students with instruction and deliberate practice on their writing. While generally successful in providing accurate scores, a common criticism of these systems is their lack of personalization and adaptive instruction. In particular, these systems tend to place the strongest emphasis on delivering accurate scores, and therefore tend to overlook additional variables that may ultimately contribute to students' success, such as their affective states during practice. This study takes an initial step toward addressing this gap by building a predictive model of students' affect using information that can potentially be collected by computer systems. We used individual difference measures, text features, and keystroke analyses to predict engagement and boredom in 132 writing sessions. The results from the current study suggest that these three categories of features can be utilized to develop models of students' affective states during writing sessions. Taken together, features related to students' academic abilities, text properties, and keystroke logs were able to more than double the accuracy of a classifier to predict affect. These results suggest that information readily available in computer-based writing systems can inform affect detectors and ultimately improve student models within intelligent tutoring systems.
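As a sketch of the keystroke-log side of the feature set, the snippet below derives two pause-based features of the kind such detectors use. The feature choices and the 2-second pause threshold are assumptions for illustration, not the study's actual feature definitions.

```python
# Hypothetical keystroke-log features (illustrative; not the paper's exact set).
def keystroke_features(timestamps):
    """timestamps: seconds at which each keypress occurred, in ascending order."""
    pauses = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    long_pauses = sum(1 for p in pauses if p > 2.0)  # 2 s threshold (assumed)
    return {"mean_pause": mean_pause, "long_pauses": long_pauses}

print(keystroke_features([0.0, 0.3, 0.7, 3.2, 3.5]))
```

Features like these would then be combined with text properties and individual-difference measures as inputs to an affect classifier.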

Citation: Laura Allen, Caitlin Mills, Matthew Jacovina, Scott Crossley, Sidney D'Mello and Danielle McNamara (2016). "Investigating Boredom and Engagement during Writing Using Multiple Sources of Information: The Essay, The Writer, and Keystrokes". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

Little is known about the processes institutions use when discerning their readiness to implement learning analytics. This study aims to address this gap in the literature by using survey data from the beta version of the Learning Analytics Readiness Instrument (LARI) [1]. Twenty-four institutions were surveyed and 560 respondents participated. Five distinct factors were identified from a factor analysis of the results: Culture; Data Management Expertise; Data Analysis Expertise; Communication and Policy Application; and Training. Data were analyzed using both the role of those completing the survey and the Carnegie classification of the institutions as lenses. Generally, information technology professionals and institutions classified as Research Universities-Very High research activity had significantly different scores on the identified factors. Working within a framework of organizational learning, this paper details the concept of readiness as a reflective process, as well as how the implementation and application of analytics should be undertaken with ethical considerations in mind. Limitations of the study, as well as next steps for research in this area, are also discussed.

Citation: Meghan Oster, Steven Lonn, Matthew Pistilli and Michael Brown (2016). "The Learning Analytics Readiness Instrument". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.

Type: Evidence | Proposition: B: Teaching | Polarity: | Sector: | Country:

In this paper we discuss the results of a study of students' academic performance in first year general education courses. Using data from 566 students who received intensive academic advising as part of their enrollment in the institution's pre-major/general education program, we investigate individual student, organizational, and disciplinary factors that might predict a student's potential classification in an Early Warning System, as well as factors that predict improvement and decline in their academic performance. Disciplinary course type (based on Biglan's [7] typology) was significantly related to a student's likelihood of entering below-average performance classifications. Students were most likely to enter a classification in fields such as the natural sciences, mathematics, and engineering in comparison to humanities courses. We attribute these disparities in academic performance to disciplinary norms around teaching and assessment. In particular, the timing of assessments played a major role in students' ability to exit a classification. Implications for the design of Early Warning analytics systems as well as academic course planning in higher education are offered.

Citation: Michael Brown, R. Matthew Demonbrun, Steven Lonn, Stephen Aguilar and Stephanie Teasley (2016). "What and When: The Role of Course Type and Timing in Students' Academic Performance". In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York.