Ever asked ChatGPT what the meaning of life is on a particularly slow day at work? It may try to give you an answer by confidently reinforcing your own worldview, mimicking human emotional pulls, or just making something up entirely in a so-called ‘hallucination’.

Hallucinations occur when the AI is incentivised to make guesses rather than simply admitting it doesn’t know the answer, which can be particularly dangerous if it is being used in a medical context. This educated guessing has been damaging to the brand amid reliability concerns, with the AI model even admitting, when asked, that it can be ‘confidently wrong’.
This ‘overconfidence’ has seen TikTokers laughing openly when AI refuses to say whether a human’s stupid hat looks ridiculous, or remains steadfast in its belief that December is spelt with an X. But this hubris could be fatal, especially considering we are relying on AI models to drive us around or spot health problems. Now, researchers have developed a solution that enables AI to recognise situations involving unfamiliar or unseen knowledge.
They say they used clues from the way the human brain solves the issue. Mimicking brain waves, they set the AI’s initial confidence to a low level, close to chance, which significantly reduced its overconfidence bias. Their training method helps models first learn the state of ‘I don’t know anything yet’ – like a newborn baby – before actual learning begins.
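The warm-up idea can be illustrated with a toy sketch. This is a hypothetical illustration, not the researchers’ actual method: it simply shows how a classifier whose output weights start at zero spreads probability evenly across classes (chance-level confidence, the ‘I don’t know anything yet’ state), whereas a randomly initialised one can already be sharply confident before seeing any training data.

```python
# Hypothetical sketch of chance-level initial confidence -- NOT the method
# from the Nature Machine Intelligence paper, just an illustrative toy.
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
num_classes = 10
x = rng.normal(size=(1, 64))  # one arbitrary, never-seen input

# Conventional init: random output weights can produce peaked logits,
# so the untrained model already "prefers" some class with high confidence.
W_random = rng.normal(size=(64, num_classes)) * 2.0
conf_random = softmax(x @ W_random).max()

# "Warm-up" alternative: zero output weights give all-zero logits, so every
# class gets probability 1/num_classes -- confidence starts at chance.
W_warm = np.zeros((64, num_classes))
conf_warm = softmax(x @ W_warm).max()

print(f"max confidence, random init:   {conf_random:.2f}")
print(f"max confidence, warm-up start: {conf_warm:.2f}")
```

Starting at chance means any confidence the model later shows has to be earned from the training data, rather than being an accident of initialisation.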
‘While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they “do not know”,’ researchers from the Korea Advanced Institute of Science and Technology explained.

‘This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans,’ Se-Bum Paik, an author of the study published in the journal Nature Machine Intelligence, said. ‘This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer.’

AI can completely hallucinate by confidently answering a question when the correct answer is ‘I don’t know’ – for example by inventing a book citation that doesn’t exist, claiming a person said something they never said, or mixing up dates or facts.

What doesn’t ChatGPT know? Metro asked its own AI ‘what don’t you know?’ and it appears the answer is ‘quite a lot’.
This is what it said:

1. Anything beyond my knowledge cutoff (2025): I don’t automatically know events, discoveries, or changes that happened after that – unless you tell me or I look them up. So very recent news, new laws, or current prices can be out of date.

2. Your personal life (unless you share it): I don’t know your thoughts, experiences, identity, or situation unless you explicitly tell me. I’m not reading minds or accessing private data.

3. Unobservable or unknowable facts: Some things just aren’t known to anyone – like the exact future (I can only estimate probabilities), unsolved mysteries (e.g., what fully caused certain historical events), and questions that don’t have clear answers (‘What’s the meaning of life?’ depends on perspective).

4. Real-time information (by default): I don’t inherently ‘see’ live data like current weather, stock prices, or live sports scores. I can get these if I use tools, but I don’t constantly track them.

5. Hidden or private data: I don’t have access to personal databases, private company records, or your files, messages, or accounts (unless you share them here).

6. Perfect accuracy: Even within what I do know, I can misinterpret a question, give outdated or incomplete info, or be confidently wrong sometimes.

OpenAI is currently valued at $852 billion.
Several lawsuits have sought damages from AI and tech companies over the influence of chatbots and social media on loved ones’ mental health.