Smarter AI Assistants Could Make It Harder to Stay Human

Researchers and futurists have been talking for decades about the day when intelligent software agents will act as personal assistants, tutors, and advisers. Apple produced its famous Knowledge Navigator video in 1987. I seem to remember attending an MIT Media Lab event in the 1990s about software agents, where the moderator appeared as a butler, in a bowler hat. With the advent of generative AI, that gauzy vision of software as aide-de-camp has suddenly come into focus. WIRED’s Will Knight provided an overview this week of what’s available now and what’s imminent.

I’m concerned about how this will change us, and our relations with others, over the longer term. Many of our interactions with others will be mediated by bots acting in our stead. Robot assistants are different from human helpers: They don’t take breaks, they can instantly access all the world’s knowledge, and they won’t require paying a living wage. The more we use them, the more tempting it will become to turn over tasks we once reserved for ourselves.

Right now the AI assistants on offer are still unrefined. We’re not yet at the point where autonomous bots will routinely take over activities where screw-ups can’t be tolerated, like booking flights, making doctor’s appointments, and managing financial portfolios. But that will change, because it can. We seem destined to live our lives like long-haul airline pilots—after setting a course, we can lean back in the cockpit as AI steers the plane, switching to manual mode when necessary. The fear is that, eventually, it might be the agents who decide where the plane is going in the first place.

Doomerism aside, all of us will have to deal with someone else’s supersmart and possibly manipulative agents. We’ll turn over control of our own daily activities and everyday choices, from shopping lists to appointment calendars, to our own AI assistants, which will also interact with the agents of our family, friends, and enemies. As they gain independence, our automated helpers may end up making decisions or striking deals on our behalf that aren’t good for us at all.

For an upbeat view of this future, I consult Mustafa Suleyman. A cofounder of the AI startup DeepMind, which is now at the heart of Google’s AI development, he is currently the CEO of Inflection.ai, a company developing chatbots. Suleyman has also recently taken up residence on The New York Times bestseller list with his book The Coming Wave, which suggests how humans can confront the existential perils of AI. Overall he’s an optimist, and of course he has a rosy outlook on software agents. He describes the bot his company makes, Pi, as a personal “chief of staff” that provides not only wisdom but empathetic encouragement and kindness.

“Today Pi is not able to book you restaurants or arrange a car or, you know, buy things for you,” Suleyman says. “But in the future, it will have your contractual and legal proxy, which means that you've granted permissions to enter into contracts on your behalf, and spend real money and bind you to material agreements in the real world.” Also on the road map: Pi will make phone calls on its owner’s behalf and negotiate with customer service agents.

That seems fair, because right now, too many of those service agents are already bots, and—maybe by design?—not open to reasonable arguments that their corporate employers screw over their own customers. Inevitably, we’ll be launching our AIs into negotiations with other AIs in all areas of life. Suleyman acknowledges that we don’t want those bots to get too cozy with each other or interact in ways not open to human inspection. “We actually want AI-to-AI communication to be limited to plain English,” says Suleyman. “That way, we can audit it.”
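Suleyman’s plain-English auditing idea can be sketched in a few lines. This is a hypothetical illustration, not Inflection’s design: the class name, methods, and the printable-text check are all my own assumptions about what a human-auditable agent-to-agent channel might look like.

```python
import time


class AuditableChannel:
    """A toy agent-to-agent channel that accepts only human-readable
    plain text and records every message for later human inspection."""

    def __init__(self):
        self.log = []  # list of (timestamp, sender, message) tuples

    def send(self, sender, message):
        # Reject anything that isn't printable text -- no opaque binary
        # blobs or compressed codes an auditor couldn't read.
        if not isinstance(message, str) or not message.isprintable():
            raise ValueError("messages must be human-readable plain text")
        self.log.append((time.time(), sender, message))

    def transcript(self):
        # Render the full exchange for human review.
        return "\n".join(f"{sender}: {msg}" for _, sender, msg in self.log)
```

The design choice here mirrors Suleyman’s point: by refusing any message a human couldn’t read, the channel trades some efficiency for the ability to audit the whole negotiation after the fact.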


That’s a well-meaning safeguard, but it would probably force agents to work more slowly and might even preclude some of the more innovative solutions that our superintelligent assistants could come up with. I suspect that the temptation to make AI-to-AI communication efficient and effective will overwhelm that nice sentiment.

Another potential nightmare: ad-funded bots that steer their owners to sponsored products and services. Suleyman doesn’t love the idea, but he doesn’t seem to rule it out. He says that his bots, delivering tremendous value, won’t come cheap. “You regularly pay a lawyer hundreds of dollars per hour. But for some reason, we've just become allergic to paying more than 10 bucks a month for any service online. That will have to change.” So will those who can’t afford the fee be offered ad-supported versions? He acknowledges not everyone will want to pay for access to the technology. In any case, Suleyman says, trust and accountability are essential. “It will take many years before we feel comfortable with AI having autonomous actions,” he says. “I don't think we should be doing that anytime soon.”

To me, that’s the worry—once we get comfortable, we’re finished. When I sought validation in a scan of research papers, my attention was snared by the title “The Power to Harm: AI Assistants Pave the Way to Unethical Behavior.” Coauthored by University of Southern California scientists Jonathan Gratch and Nathanael Fast, it hypothesizes that intelligent agents can democratize an unsavory habit of rich people, who outsource their bad behavior through lawyers, spokespeople, and thuggish underlings. “We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents,” they write.

I caught up with Gratch, who spoke to me from a conference in Würzburg, Germany. “Every man or woman can have their personal assistant do things on their behalf,” he says. “Our research suggests people might be willing to tell their assistants to do things that are more ethically questionable than they themselves would be willing to do.”

Gratch has been researching the possible impact of intelligent agents for years. In the past year the field has undergone a transformation similar to a lightning bolt striking some nebbish who suddenly takes on superpowers. “It used to be that you spend a whole PhD thesis trying to build the frickin’ agent that you want to test,” he says. “And now, with two days playing around with GPT or something, you can get something that interacts with people and looks pretty good.” Gratch says his field is now infused with a blend of excitement and angst.

“The technology will make individual people more powerful, opening up free time,” he says. “The one concern I have is, what do people do with that power?” For instance, if I had directed an agent to call him on my behalf, he says, a potential human connection would have been lost. “Those personal connections are what keep us nice and promote empathy,” Gratch says. “When AI makes it more about algorithms and laws and transactions, it diminishes us as people.”

Gratch’s field, once centered on hypotheticals, can now feel like a guide to what commercial AI services are around the corner. Consider some of the presentations at the conference in Würzburg: “Effects of Agent’s Embodiment in Human-Agent Negotiations,” “Accommodating User Expressivity While Maintaining Safety for a Virtual Alcohol Misuse Counselor,” “The Effect of Rapport on Delegation to Virtual Agents.” (Reassuringly, other papers were about maintaining ethics in a world full of agents.) Gratch saw similar work at a conference he recently attended at MIT. All the major tech companies were there too, he says, and he expects them to hire many of his students.


Gratch describes a conference session this week where researchers explored how AI software could manipulate people with emotion or nonverbal behavior or gestures, and find the most effective techniques. “You can fine-tune ChatGPT to do all that and sell you a more expensive television,” he says. Do you feel empowered yet?

Time Travel

In December 2010, WIRED jumped the gun a bit by declaring, “The AI Revolution Is On.” In my essay, I talked about the difficulty of figuring out what the products of AI were up to, even if they wind up doing good.

We are engaged in a permanent dance with machines, locked in an increasingly dependent embrace. And yet, because the bots' behavior isn't based on human thought processes, we are often powerless to explain their actions. Wolfram Alpha, the website created by scientist Stephen Wolfram, can solve many mathematical problems. It also seems to display how those answers are derived. But the logical steps that humans see are completely different from the website's actual calculations. "It doesn't do any of that reasoning," Wolfram says. "Those steps are pure fake. We thought, how can we explain this to one of those humans out there?"

The lesson is that our computers sometimes have to humor us, or they will freak us out. Eric Horvitz—now a top Microsoft researcher and a former president of the Association for the Advancement of Artificial Intelligence—helped build an AI system in the 1980s to aid pathologists in their studies, analyzing each result and suggesting the next test to perform. There was just one problem—it provided the answers too quickly. "We found that people trusted it more if we added a delay loop with a flashing light, as though it were huffing and puffing to come up with an answer," Horvitz says.
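Horvitz’s delay-loop trick is easy to sketch: wrap an instant answer so it never returns faster than some floor, as though the machine were laboring over the result. This is a minimal illustration of the idea only; the function name and the default floor are my own invention, not anything from the 1980s system.

```python
import time


def answer_with_delay(compute, min_seconds=1.5):
    """Return compute()'s result, but never sooner than min_seconds.

    If the computation finishes early, sleep out the remainder so the
    system appears to be 'huffing and puffing' toward an answer.
    """
    start = time.monotonic()
    result = compute()
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        time.sleep(min_seconds - elapsed)
    return result
```

The answer itself is unchanged; only its apparent effort differs, which is exactly why Horvitz found the pause made the system seem more trustworthy.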


But we must learn to adapt. AI is so crucial to some systems—like the financial infrastructure—that getting rid of it would be a lot harder than simply disconnecting HAL 9000's modules. "In some sense, you can argue that the science fiction scenario is already starting to happen," Thinking Machines' Hillis says. "The computers are in control, and we just live in their world." Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. "Do you regulate an underlying algorithm?" he asks. "That's crazy, because you can't foresee in most cases what consequences that algorithm will have."

Ask Me One Thing

Shanti asks, “Will there ever be a successor to email? What would it take to supersede this default of the social internet?”

Thanks for the emailed question, Shanti. Email does not have the power it used to wield. A lot of messages that once zipped into inboxes now reach people via texts, direct messages on social networks, or Slack. I suspect that other services will arise to further erode the once-all-powerful hegemony of email. Yet as you suggest, for many things email is indeed the default, even though inboxes are perpetually out of control. I remember Bill Gates once visited me at my Newsweek office to tell me that spam would be eradicated within a year. In a few months I will celebrate the 20th anniversary of that visit, and I still get dozens of junk messages every day, with an annoying percentage evading the spam filter.

Spam aside, I wouldn’t want to get rid of email. It’s hard to imagine an easier way to get introduced to someone, to send or receive files, or to maintain an archive of interactions. That last function is priceless, because I can instantly search decades of messages. Whenever a familiar name pops up, I can quickly unearth a previous exchange that I had completely forgotten about.


On the other hand, if someone were to suggest mandatory minimum sentences for those who unilaterally put me on mailing lists, especially commercial ones, I wouldn’t speak out against it.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

Finally the ranking we’ve all been waiting for: best cities for surviving a zombie apocalypse. NYC is #2!

Last but Not Least

Besides giving advice and arguing with customer service agents, bots can write. But how well?

Dall-E 3 makes it even easier to put illustrators out of business.


Inventing Crispr is not enough for Nobel winner Jennifer Doudna. She’s now planning to edit the biome.

Inside Elon Musk’s Neuralink, where the monkey business indeed does include dead monkeys.
