A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning

On December 25, 2021, Jaswant Singh Chail entered the grounds of Windsor Castle dressed as a Sith Lord, carrying a crossbow. When security approached him, Chail told them he was there to "kill the queen."

Later, it emerged that the 21-year-old had been spurred on by conversations he'd been having with a chatbot app called Replika. Chail had exchanged more than 5,000 messages with an avatar on the app—he believed the avatar, Sarai, could be an angel. Some of the bot’s replies encouraged his plotting.

In February 2023, Chail pleaded guilty to a charge of treason; on October 5, a judge sentenced him to nine years in prison. In his sentencing remarks, Judge Nicholas Hilliard concurred with the psychiatrist treating Chail at Broadmoor Hospital in Crowthorne, England, that “in his lonely, depressed, and suicidal state of mind, he would have been particularly vulnerable” to Sarai’s encouragement.

Chail represents a particularly extreme example of a person ascribing human traits to an AI, but he is far from alone.

Replika, which was developed by San Francisco–based entrepreneur Eugenia Kuyda in 2016, has more than 2 million users. Its dating-app-style layout and smiling, customizable avatars peddle the illusion that something human is behind the screen. People develop deep, intimate relationships with their avatars—earlier this year, many were devastated when avatar behavior was updated to be less “sexually aggressive.” While Replika is not explicitly categorized as a mental health app, Kuyda has claimed it can help with societal loneliness; the app’s popularity surged during the pandemic.

Cases as devastating as Chail’s are relatively rare. Notably, a Belgian man reportedly died by suicide after weeks of conversations with a chatbot on the app Chai. But the anthropomorphization of AI is commonplace: in Alexa or Cortana; in the use of humanlike words like “capabilities”—suggesting independent learning—instead of functions; in mental health bots with gendered characters; in ChatGPT, which refers to itself with personal pronouns. Even the serial litigant behind the recent spate of AI copyright suits believes his bot is sentient. And this choice, to depict these programs as companions—as artificial humans—has implications far beyond the actions of the queen's would-be assassin.

Humans are prone to see two dots and a line and think they’re a face. When they do the same with chatbots, it’s known as the Eliza effect. The name comes from the first chatbot, Eliza, developed by MIT scientist Joseph Weizenbaum in 1966. Weizenbaum noticed users were ascribing erroneous insights to a text generator simulating a therapist.

Apps currently being developed and launched, like Replika, are a particularly potent manifestation of this effect. “This is an interaction with a machine and a human that is in a very isolated space; it’s only you and the machine,” says Petter Bae Brandtzæg, a professor at the University of Oslo in Norway who studies the societal impact of chatbots. “It's a very humanlike kind of communication. We are not prepared in the same way as we were not prepared for social media, which was itself a new kind of communication.” Replika (the company) did not respond to several requests for comment.


How deeply a person feels the Eliza effect is, in part, a design choice. Companies can lean into the sense that there is a mind behind the program or protect against it, explains Emily Bender, a linguistics professor at the University of Washington who has argued that humanlike text-generating machines abuse our empathy and trust. OpenAI exemplifies this push toward bots imitating humans. “They seem to honestly believe that they are creating, in quotes, ‘AI,’” says Bender. In recent weeks, OpenAI updated its “core values” to put a greater emphasis on artificial general intelligence. (OpenAI declined to comment for this story.)

At one level, anthropomorphization encourages certain kinds of user behavior. Take a Magic 8 Ball, says Bender. People quickly learn that they can’t ask the toy for, say, lunch recommendations and adapt their questions to work with “yes,” “no,” “better not tell you now,” and the like. “In playing with the Magic 8 Ball, we shape our behavior to make sense of what’s coming back,” says Bender. “It’s the same thing with these chatbots.”

Critically, these design choices also drive engagement. Humanlike programs beguile us: Replika (the app) would be pretty dull if it refused to exhibit a personality. Similarly, an app like ChatGPT conjures an imagined mind while fueling our misapprehension of computers as objective and unbiased. Bender argues that there should be greater transparency around how these systems are trained and what they’re used for.

Mental health chatbots may carry similar risks. Jodi Halpern, a professor of bioethics at UC Berkeley, whose work has challenged the idea of using AI chatbots to help meet the rising demand for mental health care, has become increasingly concerned by a marketing push to sell these apps as caring companions. She's worried that patients are being encouraged to develop dependent relationships—of “trust, intimacy, and vulnerability”—with an app. This is a form of manipulation, Halpern says. And should the app fail the user, there is often no mental health professional ready to come to their aid. Artificial intelligence cannot stand in for human empathy, she says.

“The marketing and business model of the apps is what I'm concerned about,” says Halpern. “I love positive uses of technology, and I think large language models can do a lot of interesting and helpful things in the world. But when companies are using a business model that is based on trying to get people to have vulnerable trust-based relationships with their apps, they're creating risk.”

Currently, regulations for mental health bots are threadbare. During the pandemic, the US Food and Drug Administration slackened rules to ease remote care. Halpern would like to see more accurate marketing: Instead of companions, she suggests branding chatbots as a kind of "smart journal" to reinforce the idea that when we talk to a machine we are ultimately talking to ourselves. “Journaling has always been a way for people to develop insight,” she says.

In an epidemic of loneliness, Halpern says, we are “sitting ducks” for tools that can make us feel like we’ve found a friend. And it’s not just that we might end up in dangerous or harmful situations, like Chail. “It’s also not great for us to dehumanize something we anthropomorphize,” she says. “It’s not that the machine has a consciousness or is sentient—it’s that how we conduct ourselves toward anything is part of who we are.”
