Understanding skilled use of open automated feedback tools as teacher feedback literacy

Summary: a new paper forges a bridge between data-driven, open automated feedback platforms and teacher feedback literacy competencies:

Buckingham Shum, S., Lim, L.-A., Boud, D., Bearman, M. & Dawson, P. (2023). A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy. International Journal of Educational Technology in Higher Education, 20:40 (12 July 2023). https://doi.org/10.1186/s41239-023-00410-9 

The mass availability of generative AI continues to reshape thinking about the future of work and learning. Conversational apps can now give learners instant feedback on their work, but the educational question is how effective that interaction is. A new design space has opened up for tuning generative AI to give learners high-quality feedback. We are not in uncharted waters here: there is a growing body of knowledge on what “effective feedback” means in higher education, and on how to create the conditions for it. Effective feedback goes far beyond comments accompanying an assignment; the field has shifted towards “feedback-rich ecosystems” in which both teachers and students exercise far greater agency and sensemaking competencies.

In 2019, an exciting book came out: The Impact of Feedback in Higher Education: Improving Assessment Outcomes for Learners (Eds. Henderson, Ajjawi, Boud & Molloy):

“This book asks how we might conceptualise, design for and evaluate the impact of feedback in higher education. Ultimately, the purpose of feedback is to improve what students can do: therefore, effective feedback must have impact. Students need to be actively engaged in seeking, sense-making and acting upon any information provided to them in order to develop and improve. Feedback can thus be understood as not just the giving of information, but as a complex process integral to teaching and learning in which both teachers and students have an important role to play. The editors challenge us to ask two fundamental questions: when does feedback make a difference, and how can we recognise that impact?”

In 2020, I conceived a symposium to bring the editors and authors to UTS for two days of dialogue with CIC and other researchers, calling for a deeper exchange between researchers in the design of assessment and feedback in higher education and those developing automated-feedback tools using Learning Analytics/AI. The pandemic shifted this online, but the goals remained the same, and moving online made it easier to bring in additional participants, resulting in DAFFI 2020: Designing Automated Feedback for Impact, whose presentations I commend to you.

I’m now delighted to share one of the fruits of this work, a collaboration between CIC (Lisa Lim and myself) and our colleagues at Deakin University’s Centre for Research in Assessment and Digital Learning (CRADLE). The focus of the paper is not generative, conversational AI (which did not exist when we started this work), but technically less complicated, and correspondingly far more transparent, platforms that use simple rules authored by teachers themselves.

“In contrast to closed AF tools, we define ‘open’ AF tools as enabling the educator to specify some or all of the following key parameters in the tool’s behaviour:

  1. the student activity data that the system analyses;

  2. the algorithms that analyse that data;

  3. the feedback information the teacher wishes the software to compile for students;

  4. the modalities via which feedback information is communicated by teachers;

  5. the student-driven feedback processes that are afforded.”
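To make these parameters concrete, here is a minimal, hypothetical sketch of the kind of teacher-authored rule such tools support. Every field name, condition and message below is invented for illustration; this is not the API of any platform analysed in the paper.

```python
from dataclasses import dataclass
from typing import Callable

# Parameter 1: the student activity data the system analyses.
# All field names here are invented for illustration.
@dataclass
class StudentActivity:
    name: str
    videos_watched: int
    quiz_score: float  # 0-100

# Parameters 2 and 3: a teacher-authored condition over that data,
# and the feedback information to compile when the condition fires.
@dataclass
class FeedbackRule:
    condition: Callable[[StudentActivity], bool]
    message: str

rules = [
    FeedbackRule(
        condition=lambda s: s.quiz_score < 50,
        message="Your quiz score suggests revisiting this week's core concepts "
                "before attempting the next task.",
    ),
    FeedbackRule(
        condition=lambda s: s.videos_watched == 0,
        message="You haven't opened this week's videos yet; they introduce "
                "material the next assignment builds on.",
    ),
    FeedbackRule(
        condition=lambda s: s.quiz_score >= 80,
        message="Strong quiz result: the optional extension task may be a good stretch.",
    ),
]

def compile_feedback(student: StudentActivity) -> str:
    """Assemble a personalised message from every rule that matches."""
    matched = [r.message for r in rules if r.condition(student)]
    body = "\n".join(f"- {m}" for m in matched) or "- Keep up the steady progress."
    return f"Hi {student.name},\n{body}"

# Parameter 4 (modality): here the message is simply printed; a real platform
# might email it or surface it on a dashboard. Parameter 5 (student-driven
# processes) might let students reply, set goals, or request elaboration.
if __name__ == "__main__":
    print(compile_feedback(StudentActivity(name="Alex", videos_watched=0, quiz_score=42.0)))
```

The point of the sketch is the division of labour it illustrates: the teacher, not the vendor or a model, decides what data counts, which conditions matter, and what message each condition triggers. That is what makes such tools “open”, and what makes their skilled use a feedback literacy question.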

What does it mean to do this skillfully? We demonstrate that Boud & Dawson’s teacher feedback literacy competency framework can be applied very usefully to analysing teaching practices with data-driven, automated feedback platforms. A next step will be to think through what this means for tuning large language models for educational contexts.

A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy

Simon Buckingham Shum (a), Lisa-Angelique Lim (a), David Boud (a,b,c), Margaret Bearman (b), Phillip Dawson (b)

(a) University of Technology Sydney, AUS
(b) Deakin University, AUS
(c) Middlesex University, UK

Effective learning depends on effective feedback, which in turn requires a set of skills, dispositions and practices on the part of both students and teachers which have been termed feedback literacy. A previously published teacher feedback literacy competency framework has identified what is needed by teachers to implement feedback well. While this framework refers in broad terms to the potential uses of educational technologies, it does not examine in detail the new possibilities of automated feedback (AF) tools, especially those that are open by offering varying degrees of transparency and control to teachers. Using analytics and artificial intelligence, open AF tools permit automated processing and feedback with a speed, precision and scale that exceeds that of humans. This raises important questions about how human and machine feedback can be combined optimally, and what is now required of teachers to use such tools skillfully. The paper addresses two research questions: Which teacher feedback competencies are necessary for the skilled use of open AF tools? and What does the skilled use of open AF tools add to our conceptions of teacher feedback competencies? We conduct an analysis of published evidence concerning teachers’ use of open AF tools through the lens of teacher feedback literacy, which produces summary matrices revealing relative strengths and weaknesses in the literature, and the relevance of the feedback literacy framework. We conclude, firstly, that when used effectively, open AF tools exercise a range of teacher feedback competencies. The paper thus offers a detailed account of the nature of teachers’ feedback literacy practices within this context. Secondly, this analysis reveals gaps in the literature, signalling opportunities for future work. Thirdly, we propose several examples of automated feedback literacy, that is, distinctive teacher competencies linked to the skilled use of open AF tools.

Your comments most welcome