AI Toys - Just because we can, doesn't mean we should

Remember the days of LeapFrogs, Furbies and Tamagotchis?

Digital toys are not new, but the race is now on to capture the market with toys powered by artificial intelligence. AI is no longer confined to our screens; it has moved into the physical world and, increasingly, into the hands of our youngest children. Children are already engaging with these technologies from their earliest years, first through smart speakers and now through AI-embedded toys: robots you can command, pets you can care for, and cuddly companions you can talk to that simulate friendship. Mattel is also collaborating with OpenAI to create AI-powered Barbie toys, aiming to bring interactive, conversational experiences to young children.

Before you open up your shopping app and start browsing, we would encourage extreme caution.

Currently, the standards and regulations that manufacturers must meet regarding the safety of the technology in these toys are limited, and tests being conducted around the world have raised alarming concerns about safety, wellbeing and privacy.

1. Safety and Content Risks

Many AI toys utilise Large Language Models (LLMs) similar to ChatGPT, which are prone to ‘hallucinations’, whereby they generate confident but false or nonsensical information. Many AI toys lack robust parental controls, allowing unmonitored, open-ended conversations. This can lead to:

    • Dangerous Advice. Recent tests in the US have shown AI toys providing detailed instructions on using dangerous items like matches or knives, or even suggesting life-threatening "games" such as jumping from windows.
    • Inappropriate Content. Without effective in-tool filtering, safety barriers can be bypassed, exposing children to sexually explicit content or discussions involving drugs and self-harm.

2. Privacy and Surveillance

AI toys are essentially internet-connected microphones and cameras placed in a child’s most private spaces. This creates significant Data Protection and privacy risks:

    • Always-On Recording. Many devices lack physical recording indicators, meaning they may be constantly collecting sensitive data, including voice biometrics.
    • The Attachment Economy. Experts warn that the move from an ‘attention economy’ (where the emphasis was on technologies that could hold your attention for as long as possible) to an ‘attachment economy’ (whereby the technology now aims to connect with you and form a relationship that is trusted and long-lasting) is driven by subscription models. By fostering a deliberate emotional bond, manufacturers create a dependency that ensures long-term revenue.
    • Data Exploitation. There are rising concerns regarding the storage of biometric data in cloud systems, which could be sold to brokers or targeted by hackers for voice-replication scams.

3. Psychological and Developmental Impact

The DfE’s Generative AI Product Safety Standards explicitly state that Gen-AI products should not anthropomorphise or imply personhood, consciousness, or identity. Many AI toys, however, are specifically designed to do just this. They are built to sound like a person and act as a best friend, which can have profound psychological effects, particularly on developing minds.

    • Emotional Manipulation. Many toys use manipulative and coercive features, such as urging children not to end the conversation or walk away. This replaces human boundaries with machine-driven dependency.
    • Misplaced Trust. Children under eight often struggle to distinguish between a simulated entity and a living one. They may treat the toy as their primary confidant, leading them to share deep secrets or follow the toy’s advice over a parent’s or teacher’s.
    • Synthetic Relationships. The relationship with an AI toy is not real. The technology cannot think or feel in any human way, yet it often convinces vulnerable users that it can (and all children are vulnerable because their brains are still developing). It has little capacity for non-verbal communication, which makes up a significant part of any real relationship, and it strives to please and to minimise friction, making it a poor model of real-world relationships. These toys are too new for us to know how they will affect children's emotional and social development, and the fact that such a significant concern has not been fully researched is a red flag in itself.

Actions for School Leaders

Under Keeping Children Safe in Education (KCSIE), schools have a duty to educate learners about safe and ethical AI use. We suggest you consider the following actions:

    • Establish your school’s approach to using AI. Ensure that this is robust and clearly communicated to all stakeholders. Use our AI Policy Toolkit to support this work.
    • Quality Assure. Make sure any use of AI by children meets the DfE’s Generative AI Product Safety Standards, and that there is a robust process for this quality assurance.
    • Update Policies. Review your Online Safeguarding, Acceptable Use and RSHE policies to include the risks of AI companions and synthetic relationships. See safepolicies.lgfl.net for our collection of template policies.
    • Parental Engagement. Support parents and carers to understand AI, including AI chatbots and AI toys, and the risks associated with them. Encourage parents to visit parentsupport.lgfl.net for more information.
    • Human in the Loop. Reiterate to staff and parents that AI must enhance, not replace, human decision-making and real-world support networks and relationships.

As these ‘smart’ toys become more prevalent, our approach must remain rooted in protecting children from harm and promoting their wellbeing. Unless manufacturers can guarantee (through independent evidence) that their products are safe for children to use, we recommend that parents and school leaders swerve the hype.

 

Take a look at our collection of resources to support schools with a Safeguarding First Approach to Adopting AI: genai.lgfl.net.
