Embracing Imperfection in Learning Analytics

These days, a conference paper rarely takes more than two years to write, unless you’re simply too busy. But it’s a pleasure when the delay comes from wrestling with colleagues over difficult ideas, trying to craft a story you feel must be told, and developing different kinds of argument that you erect and demolish in turn. This is that paper. Thanks Kirsty and Andrew!

Kirsty Kitto, Simon Buckingham Shum, and Andrew Gibson (2018, in press). Embracing Imperfection in Learning Analytics. In Proceedings of LAK18: International Conference on Learning Analytics and Knowledge, March 5–9, 2018, Sydney, NSW, Australia, pp. 451–460. ACM, New York, NY, USA. https://doi.org/10.1145/3170358.3170413 [Open Access reprint]

Learning Analytics (LA) sits at the confluence of many contributing disciplines, which brings the risk of hidden assumptions inherited from those fields. Here, we consider a hidden assumption derived from computer science: that improving computational accuracy in classification is always a worthy goal. We demonstrate that this assumption is unlikely to hold in some important educational contexts, and argue that embracing computational “imperfection” can improve outcomes in those scenarios. Specifically, we show that learner-facing approaches aimed at “learning how to learn” require more holistic validation strategies. We consider what information must be provided in order to reasonably evaluate algorithmic tools in LA, so as to facilitate transparency and realistic performance comparisons.
