You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.
An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to create weak passwords; and lack transparency about their ownership and the AI models that power them.
Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.
Many “AI girlfriend” or romantic chatbot services look similar. They often feature AI-generated images of women that can be sexualized or sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots, including large and small apps, some of which purport to be “girlfriends.” Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.
“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”
Caltrider says there are multiple issues with these apps and websites. Many of the apps may not be clear about what data they are sharing with third parties, where they are based, or who creates them, Caltrider says, adding that some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.
Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.
In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.
It is unclear who owns or runs some of the companies behind the chatbots. The website for one app, called Mimico—Your AI Friends, includes only the word “Hi.” Others do not list their owners or where they are located, or just include generic help or support contact email addresses. “These were very small app developers that were nameless, faceless, placeless,” Caltrider adds.
Mozilla highlighted that several companies appear to use weak security practices when people create passwords. The researchers were able to create a one-character password (“1”) and use it to log in to apps from Anima AI, which offers “AI boyfriends” and “AI girlfriends.” Anima AI also did not respond to WIRED’s request for comment. Other apps similarly allowed short passwords, which potentially makes it easier for hackers to brute-force their way into people’s accounts and access chat data.
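To illustrate the kind of check the researchers found missing, here is a minimal sketch of a password-strength gate an app could enforce at signup. The length and character-class thresholds are illustrative assumptions, not any reviewed app’s actual policy.

```python
import re

# Illustrative threshold only -- not taken from any app reviewed by Mozilla.
MIN_LENGTH = 12

def password_is_acceptable(password: str) -> bool:
    """Reject trivially guessable passwords, such as the one-character
    password ("1") that Mozilla's researchers were able to register with."""
    if len(password) < MIN_LENGTH:
        return False
    # Require a mix of character classes to slow down brute-force guessing.
    checks = [
        re.search(r"[a-z]", password),          # lowercase letter
        re.search(r"[A-Z]", password),          # uppercase letter
        re.search(r"[0-9]", password),          # digit
        re.search(r"[^A-Za-z0-9]", password),   # symbol
    ]
    return all(checks)

print(password_is_acceptable("1"))                  # False
print(password_is_acceptable("correct-Horse-42!"))  # True
```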
Kamilla Saifulina, the head of brand at EVA AI, says in an email that its “current password requirements might be creating potential vulnerabilities” and that the firm will review its password policies. Saifulina points to the firm’s safety guidelines, which include details on subjects that people are not allowed to message about. The guidelines also specify that messages are checked for violations by another AI model. “All information about the user is always private. This is our priority,” Saifulina says. “Also, user chats are not used for pretraining. We use only our own manually written datasets.”
Aside from data-sharing and security issues, the Mozilla analysis also highlights that little is clearly known about the specific technologies powering the chatbots. “There’s just zero transparency around how the AIs work,” Caltrider says. Some of the apps do not appear to have controls in place that allow people to delete messages. Some do not say what kinds of generative models they use, or do not clarify whether people can opt out of their chats being used to train future models.
The biggest app discussed in the Mozilla research study is Replika, which is billed as a companion app and has previously faced scrutiny from regulators. Mozilla initially published an analysis of Replika in early 2023. Eugenia Kuyda, the CEO and founder of Replika, said in a lengthy statement first issued last year that the company does not “use conversational data between a user and Replika application for any advertising or marketing purpose,” and disputed several of Mozilla’s findings.
Many of the chatbots analyzed require paid subscriptions to access some features and were launched in the past two years, following the start of the generative AI boom. The chatbots are often designed to mimic human qualities and encourage trust and intimacy with the people who use them. One man was encouraged by a chatbot in a plot to kill Queen Elizabeth II; another reportedly died by suicide after messaging a chatbot for six weeks. In addition to being NSFW, some of the apps also play up their roles as useful tools. Romantic AI’s homepage says the app is “here to maintain your mental health,” while its terms and conditions clarify it is not a provider of medical or mental health services and that the company “makes no claims, representations, warranties, or guarantees” that it provides professional help.
Vivian Ta-Johnson, an assistant professor of psychology at Lake Forest College, says that speaking with chatbots can make some people feel more comfortable discussing topics they would not normally bring up with other people. However, Ta-Johnson says that if a company goes out of business or changes how its systems work, this could be “traumatic” for people who have become close to the chatbots. “These companies should take the emotional bonds that users have developed with chatbots seriously and understand that any major changes to the chatbots’ functioning can have major implications on users’ social support and well-being,” Ta-Johnson says.
Some people may be unlikely to carefully consider what they’re revealing to chatbots. In the case of “AI girlfriends,” this could include sexual preferences or kinks, locations, or private feelings. This could cause reputational damage if the chatbot system is hacked or if data is accidentally leaked. Adenike Cosgrove, vice president of cybersecurity strategy for Europe, the Middle East, and Africa at security firm Proofpoint, says cybercriminals regularly take advantage of people’s trust to scam or exploit them, and that there’s an “inherent risk” in services that collect huge amounts of people’s data. “Many users overlook the privacy implications of their data, potentially exposing themselves to exploitation, particularly when in emotionally vulnerable states,” Cosgrove says.
When it comes to AI girlfriends and their ilk, Caltrider says, people should be cautious about using romantic chatbots and should adopt security best practices. This includes using strong passwords, not signing in to the apps with Facebook or Google, deleting data, and opting out of data collection where it’s offered. “Limit the personal information you share as much as possible—not giving up names, locations, ages,” Caltrider says, adding that even these steps may not be enough with some of the services. “Even doing those things might not keep you as safe as you would like to be.”