Learning Analytics and AI: Politics, Pedagogy and Practices

Buckingham Shum, S.J. & Luckin, R. (2019). Learning Analytics and AI: Politics, Pedagogy and Practices. British Journal of Educational Technology, 50(6), pp.2785-2793. https://doi.org/10.1111/bjet.12880 | PDF | HTML

I’m delighted to say that this BJET 50th Anniversary Special Issue is now online. The 11 contributions, from leading research teams in Learning Analytics and Artificial Intelligence in Education (LA/AIED), provide critical, reflective accounts from researchers who are also system developers. Together, they bring a deep understanding of the design decisions and value commitments that underpin the emerging digital infrastructure for education.

This extract from our editorial sets out the critiques and challenges for LA/AIED to which this volume responds:

“The fears are reasonable: that quantification and autonomous systems provide a new wave of power tools to track and quantify human activity in ever higher resolution—a dream for bureaucrats, marketeers and researchers—but offer little to advance everyday teaching and learning in productive directions. This fear is justified in our post‐Snowden era of pervasive surveillance, and post‐Cambridge Analytica data breaches. Partly however, this fear is also born of lack of awareness about the diverse forms that LA/AIED take, which is equally understandable—to outsiders, these are new and opaque technologies. It follows that if we do not want to see concerned students, parents and unions protesting against AI in education, we need urgently to communicate in accessible terms what the benefits of these new tools are, and equally, how seriously the community is engaging with their potential to be used to the detriment of society.

Politics, pedagogy and practices

This special issue provides resources to tackle this challenge, by engaging with these concerns under the banner of three themes: Politics, Pedagogy and Practices:

1. The politics theme acknowledges the widespread anxiety about the ways that data, algorithms and machine intelligence are being, or could be, used in education. From international educational datasets gathered by governments and corporations, to personal apps, in a broad sense ‘politics’ infuse all information infrastructures, because they embody values and redistribute power. While applauding the contributions that science and technology studies, critical data studies and related fields are making to contemporary debates around the ethics of big data and AI, we wanted to ask, how do the researchers and developers of LA/AI tools frame their work in relation to these concerns?

2. The pedagogies theme addresses the critique from some quarters that LA/AI’s requirements to formally model skills and quantify learning processes serve to perpetuate instructivist pedagogies (eg, Wilson & Scott, 2017), branded somewhat provocatively as behaviourism (Watters, 2015). While there has clearly been huge progress in STEM‐based intelligent tutoring systems (see du Boulay, 2019; Rosé, McLaughlin, Liu, & Koedinger, 2019), what is the counter‐argument that LA/AI empowers more diverse pedagogies?

3. The practices theme sought accounts of how these technologies come into being. What design practices does one find inside LA/AI teams that engage with the above concerns? Moreover, once these tools have been deployed, what practices do educators use to orchestrate these tools in their teaching?”

[…]

“In the context of this 50th Anniversary Special Issue of the British Journal of Educational Technology, authors from a range of disciplinary backgrounds and outlooks were challenged to make the state of the art in their fields accessible to a broad audience, and to give glimpses of the road ahead to 2025. The papers are therefore primarily reflective, “big picture” narratives, reviewing and discussing existing literature and case studies, and looking forward to what could, or should, be on the horizon. Together, they provide an eclectic set of lenses for thinking about LA/AIED at a range of scales—from the macroscale of national and international policy and stakeholder networks, to the meso‐scale of institutional strategy, down to the micro‐scale of how we make cognitive models more intelligible, or design decisions more ethical.”

The abstracts and links for the 11 articles are appended below for convenience (the Editorial and the Rosé et al. paper are open access).


Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education

Ben Williamson, University of Edinburgh

Digital data are transforming higher education (HE) to be more student‐focused and metrics‐centred. In the UK, capturing detailed data about students has become a government priority, with an emphasis on using student data to measure, compare and assess university performance. The purpose of this paper is to examine the governmental and commercial drivers of current large‐scale technological efforts to collect and analyse student data in UK HE. The result is an expanding data infrastructure which includes large‐scale and longitudinal datasets, learning analytics services, student apps, data dashboards and digital learning platforms powered by artificial intelligence (AI). Education data scientists have built positive pedagogic cases for student data analysis, learning analytics and AI. The politicization and commercialization of the wider HE data infrastructure are translating them into performance metrics in an increasingly market‐driven sector, raising the need for policy frameworks for ethical, pedagogically valuable uses of student data in HE.

A social cartography of analytics in education as performative politics

Paul Prinsloo, University of South Africa

Data—their collection, analysis and use—have always been part of education, used to inform policy, strategy, operations, resource allocation, and, in the past, teaching and learning. Recently, with the emergence of learning analytics, the collection, measurement, analysis and use of student data have become an increasingly important research focus and practice. With (higher) education having access to more student data, greater variety and nuance/granularity of data, as well as collecting and using real‐time data, it is crucial to consider the data imaginary in higher education, and, specifically, analytics as performative politics. Data and data analyses are often presented as representing “reality” and, as such, are seminal in institutional “truth‐making,” whether in the context of operational or student learning data. In the broader context of critical data studies (CDS), this social cartography examines and maps the “data frontier” and the “data gaze” within the context of the dominant narrative of evidence‐based management and the data imaginary in higher education. Following an analysis of the main assumptions in evidence‐based management and the power of metrics, this paper presents a social cartography of data analytics not only as representational, but as actant, and as performative politics.

Designing educational technologies in the age of AI: A learning sciences‐driven approach

Rosemary Luckin & Mutlu Cukurova, University College London

Interdisciplinary research from the learning sciences has helped us understand a great deal about the way that humans learn, and as a result we now have an improved understanding about how best to teach and train people. This same body of research must now be used to better inform the development of Artificial Intelligence (AI) technologies for use in education and training. In this paper, we use three case studies to illustrate how learning sciences research can inform the judicious analysis of rich, varied and multimodal data, so that it can be used to help us scaffold students and support teachers. Based on this increased understanding of how best to inform the analysis of data through the application of learning sciences research, we are better placed to design AI algorithms that can analyse rich educational data at speed. Such AI algorithms and technology can then help us to leverage faster, more nuanced and individualised scaffolding for learners. However, most commercial AI developers know little about learning sciences research; indeed, they often know little about learning or teaching. We therefore argue that in order to ensure that AI technologies for use in education and training embody such judicious analysis and learn in a learning sciences informed manner, we must develop inter‐stakeholder partnerships between AI developers, educators and researchers. Here, we exemplify our approach to such partnerships through the EDUCATE Educational Technology (EdTech) programme.

Complexity leadership in learning analytics: Drivers, challenges, and opportunities

Yi-Shan Tsai, University of Edinburgh
Oleksandra Poquet, National University of Singapore
Dragan Gašević, Monash University
Shane Dawson & Abelardo Pardo, University of South Australia

Learning analytics (LA) has demonstrated great potential in improving teaching quality, learning experience and administrative efficiency. However, the adoption of LA in higher education is often beset by challenges in areas such as resources, stakeholder buy‐in, ethics and privacy. Addressing these challenges in a complex system requires agile leadership that is responsive to pressures in the environment and capable of managing conflicts. This paper examines LA adoption processes among 21 UK higher education institutions using complexity leadership theory as a framework. The data were collected from 23 interviews with institutional leaders and subsequently analysed using a thematic coding scheme. The results showed a number of prominent challenges associated with LA deployment, which lie in the inherent tensions between innovation and operation. These challenges require a new form of leadership to create and nurture an adaptive space in which innovations are supported and ultimately transformed into the mainstream operation of an institution. This paper argues that a complexity leadership model enables higher education to shift towards more fluid and dynamic approaches for LA adoption, thus ensuring its scalability and sustainability.

Practical ethics for building learning analytics

Kirsty Kitto & Simon Knight, University of Technology Sydney

Artificial intelligence and data analysis (AIDA) are increasingly entering the field of education. Within this context, the subfield of learning analytics (LA) has, since its inception, had a strong emphasis upon ethics, with numerous checklists and frameworks proposed to ensure that student privacy is respected and potential harms avoided. Here, we draw attention to some of the assumptions that underlie previous work in ethics for LA, which we frame as three tensions. These assumptions have the potential of leading either to the overcautious underuse of AIDA as administrators seek to avoid risk, or to the unbridled misuse of AIDA as practitioners fail to adhere to frameworks that provide them with little guidance upon the problems that they face in building LA for institutional adoption. We use three edge cases to draw attention to these tensions, highlighting places where existing ethical frameworks fail to inform those building LA solutions. We propose a pilot open database that lists edge cases faced by LA system builders as a method for guiding ethicists working in the field towards places where support is needed to inform their practice. This would provide a middle space where technical builders of systems could more deeply interface with those concerned with policy, law and ethics and so work towards building LA that encourages human flourishing across a lifetime of learning.

From data to personal user models for life-long, life-wide learners

Judy Kay & Bob Kummerfeld, University of Sydney

As technology has become ubiquitous in learning contexts, there has been an explosion in the amount of learning data. This creates opportunities to draw on the decades of learner modelling research from Artificial Intelligence in Education and more recent research on Personal Informatics. We use these bodies of research to introduce a conceptual model for a Personal User Model for Life‐long, Life‐wide Learners (PUMLs). We use this to define a core set of system competency questions. A successful PUML and its interface must enable a learner to answer these by scrutinising their PUML, aided by its scaffolding interfaces. We aim to give learners both control over their own learning data and the means to harness that data for the important metacognitive processes of self‐monitoring, reflection and planning. We conclude with a set of design guidelines for creating PUMLs. Our core contribution is a way to think about the design and evaluation of learning data and applications so that they give learner control and agency beyond simple data access and algorithmic transparency.

Supporting and challenging learners through pedagogical agents who know their learner: Addressing ethical issues through designing for values

Deborah Richards, Macquarie University
Virginia Dignum, Umea Universitet Teknisk-Naturvetenskaplig Fakultet; Technische Universiteit Delft

Pedagogical Agents (PAs) that would guide interactions in intelligent learning environments were envisioned two decades ago. These early animated characters had been shown to deliver learning benefits. However, little was understood regarding what aspects were beneficial for learning and what sort of learning PAs were suitable for. This article considers the current and future use of PAs to support and challenge learners from three perspectives. Firstly, we look at PAs from a practical perspective to consider what Intelligent Virtual Agents are, the roles they play in education and beyond and the underlying technologies and theories driving them. Next we take a pedagogical perspective to consider the vision, pedagogical approaches supported and new possible uses of PAs. This leads us to the political perspective to consider the values, ethics and societal impacts of PAs. Drawing all three perspectives together we present a design for values approach to designing ethical and socially responsible PAs.

Escape from the Skinner Box: The case for contemporary intelligent learning environments

Ben du Boulay, University of Sussex

Intelligent Tutoring Systems (ITSs) and Intelligent Learning Environments (ILEs) have been developed and evaluated over the last 40 years. Recent meta‐analyses show that they perform well enough to act as effective classroom assistants under the guidance of a human teacher. Despite this success, they have been criticised as embodying a retrograde behaviourist technology. They have also been caught up in broader controversies about the role of Artificial Intelligence in society and about the entry of big data companies into the education market and the harvesting of learner data. This paper concentrates on rebutting the criticisms of the pedagogy of ITSs and ILEs. It offers examples of how a much wider range of pedagogies is available than their critics claim. These wider pedagogies operate at both the screen level of individual systems and the classroom level within which the systems are orchestrated by the teacher. It argues that there are many ways that such systems can be integrated by the teacher into the overall experience of a class. Taken together, the screen level and orchestration level dramatically enlarge the range of pedagogies beyond what was possible with the “Skinner Box.”

Intelligent analysis and data visualisation for teacher assistance tools: The case of exploratory learning

Manolis Mavrikis & Eirini Geraniou, University College London
Sergio Gutierrez Santos & Alexandra Poulovassilis, Birkbeck, University of London

While it is commonly accepted that Learning Analytics (LA) tools can support teachers’ awareness and classroom orchestration, not all forms of pedagogy are congruent with the types of data generated by digital technologies or the algorithms used to analyse them. One such pedagogy that has so far been underserved by LA is exploratory learning, exemplified by tools such as simulators, virtual labs, microworlds and some interactive educational games. This paper argues that the combination of intelligent analysis of interaction data from such an Exploratory Learning Environment (ELE) and the targeted design of visualisations has the benefit of supporting classroom orchestration and consequently enabling the adoption of this pedagogy in the classroom. We present a case study of LA in the context of an ELE supporting the learning of algebra. We focus on the formative qualitative evaluation of a suite of Teacher Assistance tools. We draw conclusions relating to the value of the tools to teachers and reflect on transferable lessons for future related work.

Explanatory learner models: Why machine learning (alone) is not the answer

Carolyn P. Rosé & Elizabeth A. McLaughlin, Carnegie Mellon University
Ran Liu, MARi, LLC
Kenneth R. Koedinger, Carnegie Mellon University

Using data to understand learning and improve education has great promise. However, the promise will not be achieved simply by AI and Machine Learning researchers developing innovative models that more accurately predict labeled data. As AI advances, modeling techniques and the models they produce are getting increasingly complex, often involving tens of thousands of parameters or more. Though strides towards interpretation of complex models are being made in core machine learning communities, it remains true in these cases of “black box” modeling that research teams may have little possibility to peer inside to try to understand how, why, or even whether such models will work when applied beyond the data on which they were built. Rather than relying on AI expertise alone, we suggest that learning engineering teams bring interdisciplinary expertise to bear to develop explanatory learner models that provide interpretable and actionable insights in addition to accurate prediction. We describe examples that illustrate use of different kinds of data (eg, click stream and discourse data) in different course content (eg, math and writing) and toward different goals (eg, improving student models and generating actionable feedback). We recommend learning engineering teams, shared infrastructure and funder incentives toward better explanatory learner model development that advances learning science, produces better pedagogical practices and demonstrably improves student learning.

The heart of educational data infrastructures—Conscious humanity and scientific responsibility, not infinite data and limitless experimentation

Petr Johanes & Candace Thille, Stanford University

Education and education research are experiencing increased digitization and datafication, partly thanks to the rise in popularity of massively open online courses (MOOCs). The infrastructures that collect, store and analyse the resulting big data have received critical scrutiny from sociological, epistemological, ethical and analytical perspectives. These critiques tend to highlight concerns and/or warnings about the lack of the infrastructures’ and builders’ understanding of various nontechnical aspects of big data research (eg seeing data as neutral rather than as products of social processes). These critiques have primarily come from outside of the builder community, rendering the conversation largely one‐sided and devoid of the voices of the builders themselves. The purpose of this paper is to re‐balance the conversation by reporting the results of interviews with 11 data infrastructure builders in higher education institutions. The interviews reveal that builders engage deeply with the issues the critiques outline, not only thinking about them, but also developing practices to address them. The paper focuses the findings on three themes: designing a productive science, navigating ubiquitous ethics and achieving real human impact. Researchers, policymakers and infrastructure builders can use these accounts to better understand the building process and experience.
