Helping young people use AI safely
Artificial intelligence is now part of everyday life for many teenagers (and adults!). It can help with revision, explain tricky ideas, support creativity, generate images, improve writing and help young people explore new interests. Used well, it can be a useful tool. We would like to support you, as parents and carers, to help young people use it safely, wisely and kindly.
The NSPCC (https://learning.nspcc.org.uk/research-resources/2025/generative-ai-childrens-safety) warns that generative AI can also bring risks, including fake images, deepfakes, bullying, grooming, sextortion, harmful advice, misinformation and content that may upset or exploit children. Teenagers are often early users of new technology, and Ofcom reported that four in five online teenagers aged 13 to 17 had used generative AI tools. (www.ofcom.org.uk)
Parents and carers can help by keeping conversations open. Ask your child which AI tools they use, what they use them for, and whether they have seen anything that worried them. Try not to lead with panic; be curious instead. It may even be that they enjoy teaching you and giving you a window into this exciting world of technology. Young people are more likely to talk to us when they feel they will be listened to rather than judged or worried over.
A useful rule, as with all online platforms and sites, is: “Do not share anything with AI that you would not want shared more widely.” This includes full names, school details, addresses, phone numbers, passwords, private worries, personal photographs, or images of friends. Remind children that AI tools may store information, and that a chatbot, although it can feel genuine, is not a real friend, counsellor, doctor or trusted adult.
Young people also need to know that AI can get things wrong. It can make information sound certain even when it is false. Encourage them to check important information with a trusted source, especially anything about health, relationships, law, money, schoolwork or safety. The NSPCC advises parents to talk about what AI can and cannot do, and to help children think critically about what they see online.
Families should also talk clearly about images. It is never acceptable to use AI to create fake, embarrassing, sexualised or threatening images of another person. Something described as a joke can cause serious harm and may also have serious school, safeguarding or legal consequences. Children should also be told not to upload images of themselves or others to unfamiliar AI tools.
Practical safeguards help too. Check privacy settings, app permissions, age ratings and parental controls on phones, tablets, games, browsers and social media accounts. The UK Safer Internet Centre recommends using safety settings and having regular conversations about online life, rather than relying on one single technical fix. (UK Safer Internet Centre)
Finally, agree a simple action plan. If your child sees something worrying, receives a threat, is asked for images, or discovers that a fake image of them has been made, they should not reply, pay money, or try to manage it alone. They should save evidence, block and report the account, and speak to a trusted adult straight away. School can help, and support is also available from Childline, CEOP and the NSPCC.
AI is part of the world our children are growing into. With curiosity, clear boundaries and calm adult support, we can help them benefit from it while staying safe.