Digital Assess supports workflows for the assessment of coursework and other evidence-based assessment scenarios. The system can be used for conventional assessor marking or for peer-learner assessment. Learning analytics drives a process known as adaptive comparative judgement, which increases the reliability of the assessment.
Adaptive comparative judgement is a development of the assessment approach in which pairs of student work are compared against defined dimensions of quality. Learning analytics drives the adaptivity by automatically determining which pairs to present to which individuals undertaking the assessment, so that each round of comparison maximises the increase in the reliability of the grading. Over several rounds of comparative judgement, reliability statistics are computed, along with statistics that identify both judges and pieces of student work that are problematic. The process can also support year-on-year standardisation. The method is particularly applicable where a detailed marking scheme is ill-suited to the object of assessment (for example, in creative subjects or for "soft skills"), where detailed marking would be excessively time-consuming, or where peer assessment has a pedagogic role.
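The core loop described above can be sketched in code. The following is a minimal, illustrative simulation, not the Digital Assess implementation: items carry hypothetical quality values, the "adaptive" step is approximated by always presenting the pair of items whose current estimated scores are closest (a crude stand-in for maximising information gain), and judgements are scored with a simple Elo-style update rather than the Rasch/Bradley-Terry modelling typically used in real comparative judgement systems.

```python
from itertools import combinations

def adaptive_cj(n_items, true_quality, n_rounds, k=32.0):
    """Toy adaptive comparative judgement loop (illustrative only).

    true_quality is a hypothetical ground truth used to simulate a
    judge who always prefers the genuinely better piece of work.
    Returns an estimated score per item.
    """
    scores = {i: 1000.0 for i in range(n_items)}
    counts = {pair: 0 for pair in combinations(range(n_items), 2)}

    for _ in range(n_rounds):
        # Adaptive pairing: choose the pair with the closest current
        # estimated scores, breaking ties by least-compared pair.
        a, b = min(
            counts,
            key=lambda p: (abs(scores[p[0]] - scores[p[1]]), counts[p]),
        )
        counts[(a, b)] += 1

        # Simulated judgement: the higher-quality item wins the comparison.
        winner, loser = (a, b) if true_quality[a] > true_quality[b] else (b, a)

        # Elo-style score update: the less expected the win, the larger
        # the adjustment to both scores.
        expected = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400.0))
        scores[winner] += k * (1.0 - expected)
        scores[loser] -= k * (1.0 - expected)

    return scores
```

For example, with four items of (hidden) quality `[3, 1, 4, 2]` and a couple of hundred comparison rounds, ranking the items by their final scores recovers the true quality ordering, illustrating how repeated pairwise judgements can yield a reliable rank order without a detailed marking scheme.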
Research undertaken by academics and high-stakes awarding bodies has demonstrated that adaptive comparative judgement is a reliable method, with inter-rater reliability exceeding that typical of conventional essay marking.
[This synopsis was originally produced by the LAEP project - Learning Analytics for European educational Policy https://laepanalytics.wordpress.com/]