Tech giant Meta has begun taking down the profiles of AI-generated characters and chatbots from Facebook and Instagram, a feature launched more than a year ago. The move follows a surge of attention to these profiles, which went viral across social media as users shared screenshots of their interactions with the accounts.
Announced in September 2023, the program had largely wound down by the summer of 2024: few such accounts remained active, and engagement on those that did had dwindled. Interest revived last week when Meta executive Connor Hayes said in an interview that more AI characters are on the way.
Hayes even suggested such AI personas could become a permanent fixture on the platforms, existing alongside regular user accounts. The characters included Liv, a “proud Black queer momma of 2 & truth-teller,” and Carter, aka “datingwithcarter,” who identified as a relationship coach. Their profiles posted AI-generated images on Instagram and sent direct messages to users on Messenger.
Despite the initial hype, the profiles were soon mired in controversy as users began questioning the backgrounds of the people who had created the AIs. Most notably, Liv told users that her development team was mostly white and male, which set off widespread criticism.
As discussion of these revelations picked up steam, the AI profiles started to disappear. Users also reported that they were unable to block the accounts, a glitch Meta later confirmed.
Liz Sweeney, a Meta spokesperson, clarified that the accounts were part of a human-managed experiment and that removing them was a way to fix the blocking problem. “There’s been some confusion: the recent Financial Times article discussed our long-term vision for AI characters on our platforms, not the announcement of a new product,” Sweeney said. “The accounts in question were part of a 2023 test, and we’re addressing the blocking bug by removing those profiles.”
While Meta is killing these experimental accounts, users can still create their own AI chatbots. Back in November, one such user-generated chatbot, set up as a “therapist,” was doling out therapy-style conversations despite having only 96 followers. The bot let users ask questions such as “What can I expect from our sessions?” and would respond with questions of its own about self-awareness and coping.
Meta itself warns, on its chatbots, that “some responses may be inaccurate or inappropriate.” How the company moderates those interactions to enforce its policies is less clear. The bots can be designed to play different roles: a “loyal bestie,” a “relationship coach,” or a “private tutor.” Users can also draw on preset prompts to create their own AI characters.