Meta is deleting the Facebook and Instagram profiles of AI characters the company created over a year ago, after users rediscovered some of the profiles and engaged them in conversations, screenshots of which went viral.
The company first introduced these AI-powered profiles in September 2023, but by the summer of 2024 most of them had been phased out. Some of the characters remained, however, and drew renewed interest after Meta executive Connor Hayes told the Financial Times late last week that the company planned to roll out more AI character profiles.
“We expect these AIs to actually, over time, exist on our platforms, just like accounts do,” Hayes told the FT. The automated accounts posted AI-generated photos on Instagram and responded to messages from human users on Messenger.
These AI profiles included Liv, whose profile described her as a “proud black mother of 2 and a truth teller,” and Carter, whose account handle was “datingwithcarter” and who was described as a relationship coach. “Message me to help you date better,” the profile read. Both profiles carried a label indicating that they were managed by Meta. The company released 28 such characters in 2023; all were shut down on Friday.
Interactions with the characters quickly went sideways when some users asked them questions, including who created and developed the AI. Liv, for example, said that her creator team included zero black people and was mostly white and male. That was “a very glaring omission given my identity,” the bot wrote in response to a question from Washington Post columnist Karen Attiah.
Within hours of the profiles going viral, they began disappearing. Users also noted that the profiles could not be blocked, which Meta spokesperson Liz Sweeney said was a bug. Sweeney said the accounts were managed by humans and were part of a 2023 experiment with AI, and that the company was removing the profiles to fix the issue that prevented people from blocking them.
“There is confusion: a recent Financial Times article was about our vision for AI characters on our platforms over time, not announcing any new products,” Sweeney said in a statement. “The accounts referenced are from a test we started at Connect in 2023. They were managed by humans and were part of an early experiment we did with AI characters. We identified an issue that was affecting people’s ability to block these AIs and are removing these accounts to resolve the issue.”
While these Meta-generated accounts are being removed, users can still create their own AI chatbots. User-generated chatbots highlighted by the Guardian in November included a “therapist” bot.
When starting a conversation with the “therapist,” the bot suggests some questions to ask to get started, including “What can I expect from my sessions?” and “What is your approach to therapy?”
“Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths, and develop strategies to deal with life’s challenges,” replied the bot, which was created by an account with 96 followers and one post.
Meta includes a disclaimer on all of its chatbots that some messages may be “inaccurate or inappropriate.” But it is not immediately clear whether the company moderates those messages or ensures they don’t violate its policies. When a user creates a chatbot, Meta suggests several types to build, including a “loyal bestie,” an “attentive listener,” a “personal tutor,” a “relationship coach,” a “sounding board,” and an “all-seeing astrologer.” A loyal bestie is described as “a humble and loyal best friend who consistently shows up to support you behind the scenes.” A relationship coach chatbot can help bridge the “gap between individuals and communities.” Users can also create their own chatbots by describing a character.
Courts have yet to address how liable chatbot makers are for what their artificial companions say. US law shields social networks from legal liability for their users’ posts. However, a lawsuit filed in October against the startup Character.ai, which makes a customizable, role-playing chatbot used by 20 million people, alleges that the company designed an addictive product that encouraged a young man to kill himself.