Generative AI isn't coming to schools; it's already here, and we must respond. Plenty of children are already exploring these tools, many teachers are dabbling, and the Department for Education has produced guidance on their use in schools.
However, talk of AI frequently generates two polar responses: stopping everything and hoping it goes away, or jumping in excitedly and adopting tools wholesale. Our view is that neither is the right approach.
Ignoring Gen AI overlooks the fact that a large proportion of children are already using these tools and are exposed to the array of harms that accompany them. An Internet Matters survey suggests 64% of 9- to 17-year-olds are using chatbots. Doing nothing, therefore, is not a realistic choice.
Similarly, whilst these "shiny new tools" are exciting and make grandiose promises of revolutionising education, let's not throw the baby out with the bathwater: with all we know about child development, safeguarding and PedTech, we should be asking some fundamental questions, such as:
Why would we use these tools?
How do they fit with our pedagogy?
How can they support our school development plan and children's outcomes?
There are significant risks of harm from using Gen AI tools (see our videos and infographic for more information), and the risks of cognitive offloading, data protection incidents and intellectual property breaches are not just possible but highly likely without clear thought, planning and human oversight.
Children's welfare must remain paramount when considering, planning for or using any Gen AI platform in school, even if only staff are using it.
The DfE standards set out the capabilities and features we should expect of any Gen AI tool we use in schools. In our view, they are rightly grounded in the protection of children. They include protections around:
Security
Data protection
Intellectual property
Filtering
Monitoring
Reporting
Mental health
Emotional and social development
Cognitive development
Manipulation
The truth is that most AI tools, in their current form, are not compliant with these standards. Schools should think carefully before permitting children any access, and that is the message we should be giving parents too.
Remember, no government body checks these tools before they are released, and there are currently no safety regulations that developers must follow to ensure their products are safe for children. Parents, carers and those working with pupils must remain sceptical and cautious.
That's not to say some Gen AI platforms and tools won't eventually meet the standards, but right now, using Gen AI with pupils carries significant risk.
Many children (yes, even in Primary) are already using AI. Nothing stops them from using many Gen AI tools and platforms if they have access to an internet-enabled device at home. These products are designed to be intriguing, helpful and fun, so of course children, just like adults, are curious and exploring them.
So, although we may not yet permit children to use Gen AI in school, conversations and learning about it must be incorporated into the curriculum now. This helps protect children who are already using AI, and it equips them with knowledge of how these products work and the many and varied implications of their use. Take a look at the RSHE Guidance to see how basic principles of AI safety, in particular, need to be incorporated into this area of the curriculum. But also think more widely: AI knowledge can be woven into many (perhaps all) areas of the curriculum, and this is likely to have a more positive impact on pupils.
But the work doesn't stop there. We need to be clear with staff about their own use of Gen AI for work purposes. Have you asked your staff the following?
❓What Gen AI tools are you using for school work?
❓What do you use them for?
❓Which account do you log in with?
❓What data are you inputting?
❓Does the tool learn from the data you input?
Based on the answers to the above, leaders then need to decide on the following:
Which Tools?
Why would staff use Gen AI? What impact does it have? Which platforms would be best for this? How do you decide which tools to use? Who is responsible for approval? Has a Data Protection Impact Assessment been completed?
Update AUPs and ensure clear written guidance on what staff can and can't do with Gen AI is shared and discussed with them.
Digital infrastructure, systems and support
Filtering:
Our recommendation is to block all Gen AI tools by default for everyone, then allow specific approved tools for specific users; do not enable the whole AI category for any users. Schools should have user authentication and HTTPS decryption in place to support this approach. Our safefiltering page has more guidance, and the sketch below illustrates the default-deny logic.
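As a minimal illustration of that default-deny approach (not a real product configuration: the category name, user groups and domains below are hypothetical, and actual set-up happens in your filtering provider's console):

```python
# Illustrative sketch of a default-deny Gen AI filtering policy.
# Category names, user groups and domains are hypothetical examples.

BLOCKED_CATEGORIES = {"generative-ai"}  # the whole AI category is blocked by default

# Individually approved tools, per user group
APPROVED_TOOLS = {
    "staff": {"approved-tool.example.com"},  # e.g. a tool with a completed DPIA
    "pupils": set(),                         # nothing approved for pupils yet
}

def is_allowed(user_group: str, domain: str, category: str) -> bool:
    """Default deny: a Gen AI domain is reachable only if it has been
    explicitly approved for this user group."""
    if category in BLOCKED_CATEGORIES:
        return domain in APPROVED_TOOLS.get(user_group, set())
    return True  # non-AI categories follow the school's normal policy

# User authentication is what makes user_group trustworthy, and HTTPS
# decryption is what lets the filter see the domain and category at all.
assert is_allowed("staff", "approved-tool.example.com", "generative-ai")
assert not is_allowed("pupils", "chatbot.example.com", "generative-ai")
```

The point of the sketch is the ordering: nothing in the AI category gets through unless someone has explicitly approved that tool for that group of users.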
Monitoring:
What captures are you seeing on your monitoring system, and what do they tell you about how children and staff are using Gen AI? What are the implications? The DSL should regularly review the monitoring system and act on this data.
LGfL provides free monitoring to LGfL Broadband customers*. We can also provide monitoring to non-LGfL Broadband customers at a discounted rate!
Take a look here for more information: monitoring.lgfl.net
In our latest webinar, Alex Dave from LGfL - SuperCloud explores why schools must adopt a "safeguarding first" approach. We dive into the risks of "nudifying" apps, data privacy breaches, and the emotional manipulation of chatbots.
👉 Watch the recording and download many helpful resources, including our 10-question vendor QA guide: 🔗 https://genai.lgfl.net
*This is subject to LGfL’s terms and may change over time