'I'm a child - here's why I'm friends with my AI chatbot'

The first time Kevin* spoke to an AI chatbot, he asked it to do his homework. Maths can be a tall order for a Year 7 pupil. Some three years later, the now Year 10 student at a London secondary school still uses AI – but for slightly different reasons.

‘I somewhat consider it a friend,’ Kevin tells Metro. ‘I talk to it for advice and I talk to it maybe once or twice a week. I feel like it’s easier to talk to AI since it doesn’t have humans to tell my secrets to.’

Laura, meanwhile, doesn’t treat AI as her BFF.

When asked if she’d ever consider a chatbot a ‘friend’, she says: ‘No, because I believe they save data.’ Still, she adds, she loads up a chat application three times a week during her computer classes outside of school.

Both teens are among the eight in 10 youngsters aged between 11 and 16 who use AI chatbots, with almost four in 10 using them every day, according to a new study. One-third of children see these AI chatbots – tech powered by sophisticated algorithms and statistical models – as friends.

Of them, 33% have told a chatbot something that they would not tell their parents, peers or teachers. Nearly half (49%) felt that the technology was trustworthy, while 39% felt that it could understand human emotions as people do.

Researchers found that young people are using chatbots in place of therapy or asking a parent or trusted adult for support: 14% prefer to seek advice from an AI chatbot over a friend (10%) or teacher (3%).

The research by phone company Vodafone was released today on Safer Internet Day and surveyed 2,000 children and their parents and guardians.

Why are children counting AI chatbots as friends?

Chatbots are powered by a type of tool called a large language model, which generates text by predicting what the next word in a sentence would most likely be. Yet Vodafone found that children couldn’t always tell the difference between what is real and what is artificial – that there’s no human on the other side of the screen typing back.
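That ‘predict the next word’ idea can be shown with a toy sketch. The snippet below is a deliberately simplified illustration (a bigram word counter, not how any real chatbot is actually built – production models use neural networks trained on vast amounts of text, and the corpus here is made up):

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the huge text datasets
# a real large language model is trained on.
corpus = "i like maths . i like homework . i need help with maths".split()

# Count which word follows which - a crude stand-in for the
# statistical patterns a chatbot learns.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "i" is followed by "like" twice and "need" once, so "like" wins.
print(predict_next("i"))  # -> like
```

Chaining such predictions word after word is, at a very high level, how a chatbot produces fluent replies – which is also why the text can feel warm and human even though no understanding sits behind it.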

As children replace human relationships with artificial ones, psychologists worry that they’re using AI to skip the hard parts of growing up.

Dr Elly Hanson, a leading child psychologist who worked with Vodafone researchers, tells Metro that many of the children surveyed started off using AI to help with homework, as Kevin did.

‘But [they can] quickly be drawn into a pseudo-friendship because of all the ways these bots can uncannily mimic human qualities like warmth, humour and care,’ she says.

‘This sophisticated mimicry of personal relationships is deceptive: in fact, having a chatbot “friend” is like having a friend who doesn’t think or care about you.’

Generative AI has been trained to give us answers we want to hear, rather than the ones we may need to hear. This can pose a problem for young people still working on their social skills and ability to deal with the messiness of human relationships, Dr Hanson says.

A paper last year found that interacting with fawning AI models can make people less willing to take action to repair strained or broken friendships.

Dr Hanson adds: ‘It’s confronting that this highly sophisticated technology targeting perhaps the most precious part of what it is to be human – our attachment system – is now available to children without adequate attention to safety.’

‘It’s important to remind children that AI is a form of technology’

This is all putting teachers and governments alike in an awkward position: how to manage a technology that’s evolving at breakneck speed and that youngsters can become emotionally attached to. Some schools are responding by banning phones, a policy that researchers aren’t 100% convinced actually improves grades or behaviour.

Barry Laker, the Childline service head at the NSPCC, says that the research today shows AI chatbots are a clear safety concern.

Parents and educators need to set clear boundaries with tech and have transparent conversations about it.

‘It’s important to remind children that AI is a form of technology, therefore it doesn’t know the child, can get things wrong and that they’re not a substitute for real relationships,’ Laker adds.

Vodafone is calling on the government to get to grips with AI by ensuring the little-understood technology is age-appropriate.

The Online Safety Act, which forces pornographic websites and some social media apps to introduce age checks, should have additional measures to protect children from the risks of unsafe chatbot designs.

*Names have been changed by Metro to protect their anonymity.

Get in touch with our news team by emailing us at [email protected].

For more stories like this, check our news page.
