Co-designing AI ethics in education

(Acknowledgements: DALL•E)

2024 here we come… The current frenzy around artificial intelligence in education was triggered just over a year ago by the explosive arrival of ChatGPT, which made the power of the most mature large language model yet developed freely available to the masses via an engaging conversational user interface. Every level of the educational sector then spent 2023 grappling with the implications, and I’ve shared my small pieces of that puzzle in other blog posts. (For those interested, R&D in “AIED” is not new, dating back ~40 years depending on how you count.*)

The tech is advancing at a dizzying pace that can leave us disoriented, and few expect 2024 to be any different. But one challenge consistently faced by every school, college and university is to build and sustain trust that AI will be used responsibly. Easier said than done:

  • Local ethics. There are endless lists of AI ethics principles that on first inspection would seem to make sense everywhere (“fairness”, “accountability”, “transparency”, etc.), but translation work is needed. There will be local sensitivities around how these are implemented. What do qualities like “trust” and “responsible” mean to teachers, students, parents, leaders, educational authorities? There will certainly be commonalities that translate across contexts, but building trust means taking your people on the journey, so that they can internalise what these ideas mean, bring abstract principles to life in their own language and metaphors, and tell user stories they can inhabit.
  • High quality deliberation. Moreover, the issues are complex. How do we convene informed, respectful dialogue between diverse stakeholders? Calling a ‘town hall’ open to all comers risks being superficial (there’s no time to grapple with the complexities; contributions are misinformed), tokenistic (those in power have already made the decisions), or attracting only the most confident or strident voices. A brainstorming workshop provides more space to go deep, but often involves no learning, its participants may not represent the true diversity of the community, and while hugely generative of ideas, it may fail to converge on tangible outcomes that actually make a difference.

Agreeing on what WE consider acceptable practice in OUR context can provide a sense of orientation and safety amid the turbulence, provided, of course, that those agreements are then implemented.

In late 2021, here at UTS we set out to grapple with this, and designed the EdTech Ethics forum for the university community with these concerns in mind. We put out an initial report documenting the process and preliminary feedback in the immediate aftermath, but then did the key work of interviewing participants and analysing their feedback, and contributed to the university’s governance processes as it developed and published its AI Operations Policy and Procedures.

So I’m delighted to share a forthcoming journal paper documenting how we ran this, what the participants thought, and the tangible outcomes. The paper acknowledges the many people who made this possible, but special thanks to my co-authors Teresa Swist and Kal Gulson at Sydney University Education Futures Studio, who joined the project as external participants to UTS and conducted the interviews. This work on Deliberative Democracy intersects with our collaboration around Technical Democracy. Chad Foulkes from Liminal by Design was an awesome workshop session designer and facilitator under the tricky lockdown conditions. And thanks to Chris Riedy and Nivek Thompson (UTS Institute for Sustainable Futures), whose guidance and teaching on Leading Deliberative Democracy and Doing Deliberative Democracy started me down this road (highly recommended online micro-credentials!).

Swist, T., Buckingham Shum, S., & Gulson, K. N. (2024). Co-producing AIED Ethics Under Lockdown: An Empirical Study of Deliberative Democracy in Action. International Journal of Artificial Intelligence in Education. Published online: 27 February 2024.

Abstract: It is widely documented that higher education institutional responses to the COVID-19 pandemic accelerated not only the adoption of educational technologies, but also associated socio-technical controversies. Critically, while these cloud-based platforms are capturing huge datasets, and generating new kinds of learning analytics, there are few strongly theorised, empirically validated processes for institutions to consult their communities about the ethics of this data-intensive, increasingly algorithmically-powered infrastructure. Conceptual and empirical contributions to this challenge are made in this paper, as we focus on the under-theorised and under-investigated phase required for ethics implementation, namely, joint agreement on ethical principles. We foreground the potential of ethical co-production through Deliberative Democracy (DD), which emerged in response to the crisis in confidence in how typical democratic systems engage citizens in decision making. This is tested empirically in the context of a university-wide DD consultation, conducted under pandemic lockdown conditions, co-producing a set of ethical principles to govern Analytics/AI-enabled Educational Technology (AAI-EdTech). Evaluation of this process takes the form of interviews conducted with students, educators, and leaders. Findings highlight that this methodology facilitated a unique and structured co-production process, enabling a range of higher education stakeholders to integrate their situated knowledge through dialogue. The DD process and product cultivated commitment and trust among the participants, informing a new university AI governance policy. The concluding discussion reflects on DD as an exemplar of ethical co-production, identifying new research avenues to advance this work. To our knowledge, this is the first application of DD for AI ethics, as is its use as an organisational sensemaking process in education.

Your thoughts welcomed on LinkedIn…

* Histories of AIED:

Doroudi, S. (2023). The Intertwined Histories of Artificial Intelligence and Education. International Journal of Artificial Intelligence in Education, 33(4), 885-928.

Pham, S. T. H., & Sampson, P. M. (2022). The development of artificial intelligence in education: A review in context. Journal of Computer Assisted Learning, 38(5), 1408-1421.

Woolf, B. P. (2015). AI and education: Celebrating 30 years of marriage. In C. Conati, N. Heffernan, A. Mitrovic, & M. F. Verdejo (Eds.), Artificial intelligence in education: 17th international conference, AIED 2015, Madrid, Spain, June 22–26, 2015. Proceedings (pp. 38–47). Springer International Publishing.
