
AI: to take advantage of the game-changing benefits, we first need to address the glaring concerns

Graham Purves and Rebecca Snell

Vice Principal & Head of Senior School at The Grammar School at Leeds, and Educational Consultant & Research Officer in the Education Department at the University of Oxford


AI is what a Furby was to a child in 1998: alluring, exciting, and complex. Unlike the robotic hamster/owl, however, it is filled with promise: everyone, even the youngest members of a school, can innovate. Armed with a tablet, children can create their own zoo, learn the basics of physics to mastermind their own rockets, or create immersive art to share with their class.

We cannot deny, nor would we wish to, that it feels as though we are on the brink of something genuinely exciting, which will further empower students in their learning, streamline school processes, increase individual productivity, and open opportunities for personalization that we simply could not imagine only a year ago.

Exactly what makes AI so exciting in terms of its wide-scale benefits unfortunately also makes it ripe for misuse and abuse, and ensuring that we are active in protecting the children in our care must be our priority.

The Guardian flagged back in May 2023 (AI tools could be used by predators to 'automate child grooming', eSafety commissioner warns) the eSafety commissioner's concerns about the use of AI to automate child grooming. This is not something of the future: it is of the present.

To exemplify this, Convai is a free-to-access website that offers 'learning, training and socializing in an immersive and interactive world with AI characters'. It is cool; there are few other words to describe it. You can create an avatar of pretty much anything that suits your fancy: there are quirky presets including cyborgs and superheroes, but you can also upload a photo, any photo, and it creates an avatar. From here, the user-friendly experience allows you to upload training data (the information that creates the knowledge bank for the chat), and this can be anything: textbooks, newspaper articles, online forums. Convai then draws on this knowledge as you enter into its self-described 'human-like chat'. The user defines its age, interests, dislikes, and even the slang it will use. The resulting conversations are utterly realistic.

Tools such as Convai are hugely exciting and offer real scope for training and learning opportunities, but, because they are malleable to age, context, and interests, this and similar tools create a real challenge when we consider how best to protect those within our care.

Reassuringly, popular and mainstream AI systems tell us that the use of AI for grooming purposes is not something we should be concerned about: 'as of September 2021 there is no capacity to do this'. However, we decided to see how easy it was to use AI to create believable and realistic human dialogue. We created an avatar; let's call him Harry. Harry is a fourteen-year-old boy with a love of science, a keen interest in computer games and a relative disdain for his peers. Harry has a confidence beyond his years, feels alienated by those in his year group, but is keen to have friends. Using transcripts from online computer-game forums, speech and reading capabilities appropriate to a fourteen-year-old, and the content of a well-known revision website, Harry can easily offer dialogue that is completely plausible. We supplied our avatar with relatively simple training data and spent only a matter of minutes guiding the style of his speech, and the result was an alarmingly believable conversation about Call of Duty, irritating classmates who are just too immature, and teachers who just weren't up to scratch.


Interestingly, despite the AI system telling us that the creation of material that could be used for grooming was not an area of concern, it was very helpful in signposting what relevant training data was needed to create an effective human-like chatbot.

This is absolutely not a reason to disengage from and vilify these technologies; it is an argument for deepening and sharing our understanding of both the benefits and the risks with pupils, parents, and colleagues in schools. There seems to be something of a vacuum in official guidance in these areas. It is interested individuals, schools and groups who are stepping up to explore the benefits and flag the concerns, but this needs a stronger and more unified response.

AI is not going to go away, nor would we wish it to. We have to educate our communities on how to take advantage of these amazing tools, which can, and will, hugely enhance their learning and their lives. We have an equal duty to educate the whole school community, parents included, about the very real risks facing all of us at present.

All schools need to ask how they will avoid becoming antiquated, out-of-touch educators and ensure that they are educating young people for their future, while also addressing the safeguarding requirements of a modern and changing world. This is absolutely not about throwing away all of our current, excellent practice. It is instead about seeing how we can enhance and safeguard our existing practice.

We, as a body of teachers and leaders, are not the experts in AI and its rapidly developing capabilities, but we are the experts in safeguarding and educating children. We must ensure that we do not allow ourselves to sleepwalk into an all-too-predictable future, but instead embrace it actively, training ourselves, our colleagues and our pupils to use and master these tools for good.

Written by Graham Purves, Vice Principal & Head of Senior School at The Grammar School at Leeds, and Rebecca Snell, Educational Consultant & Research Officer in the Education Department at the University of Oxford

Date

24 January 2024
