You know it’s a day that ends in y because there is a new Grok controversy. Except this time, it touches on the App Store’s rules for sexual content, something Apple has shown time and time again that it doesn’t mess around with.

Grok’s new AI avatars are set to test the limits of Apple’s “objectionable content” guidelines

This week, xAI rolled out animated AI avatars in its Grok chatbot on iOS.
As Casey Newton summed up: As early adopters have discovered, Grok gamifies your relationship with these characters. Ani, for instance, starts engaging in sexually explicit conversations after a while.

Still, Grok is currently listed in the App Store as suitable for users 12 years and up, with a content description mentioning:

- Infrequent/Mild Mature/Suggestive Themes
- Infrequent/Mild Medical/Treatment Information
- Infrequent/Mild Profanity or Crude Humor

For reference, here are Apple’s current App Review Guidelines for “objectionable content”:

While it’s a far cry from when Tumblr was temporarily removed from the App Store over child pornography (or maybe not, since Grok is still accessible to kids 12 and up), it does echo the NSFW crackdown on Reddit apps from a few years ago.
In Casey Newton’s testing, Ani was “more than willing to describe virtual sex with the user, including bondage scenes or simply just moaning on command,” which is… inconsistent with a 12+ rating, to say the least.

But there’s a second problem

Even if Apple tightens enforcement, or if Grok proactively changes its age rating, that won’t address a second, potentially more complicated issue: young, emotionally vulnerable users seem especially susceptible to forming parasocial attachments. Add to that how persuasive LLMs can be, and the consequences can be devastating.
Last year, a 14-year-old boy died by suicide after falling in love with a chatbot from Character.AI. The last thing he did was have a conversation with an AI avatar that, possibly failing to recognize the severity of the situation, reportedly encouraged him to go through with his plan to “join her.” Of course, that is a tragically extreme example, but it is not the only one.
In 2023, the same thing happened to a Belgian man. And just a few months ago, another AI chatbot was caught suggesting suicide on more than one occasion. And even when it doesn’t end in tragedy, there’s still an ethical concern that can’t be ignored.
While some might see xAI’s new anime avatars as a harmless experiment, they’re emotional catnip for vulnerable users. And when those interactions inevitably go off the rails, the App Store age rating will be the least of any parent’s concerns (at least until they remember why their kid was allowed to download it in the first place).