OpenAI Gives ChatGPT a Memory

The promise and peril of the internet has always been a memory greater than our own, a permanent recall of information and events that our brains can’t store. More recently, tech companies have promised that virtual assistants and chatbots could handle some of the mnemonic load by both remembering and reminding. It’s a vision of the internet as a conversation layer rather than a repository.

That’s what OpenAI’s latest release is supposed to provide. The company is starting to roll out long-term memory in ChatGPT—a function that maintains a memory of who you are, how you work, and what you like to chat about. Called simply Memory, it’s an AI personalization feature that turbocharges the “custom instructions” tool OpenAI released last July. Using ChatGPT custom instructions, a person could tell the chatbot that they’re a technology journalist based in the Bay Area who enjoys surfing, and the chatbot would consider that information in future responses within that conversation, like a first date who never forgets the details.

Now, ChatGPT’s memory persists across multiple chats. The service will also remember personal details about a ChatGPT user even if they don’t set a custom instruction or tell the chatbot directly to remember something; it simply picks up and stores details as conversations roll on. This works in both the free version of ChatGPT, which runs on GPT-3.5, and the paid version, which runs on GPT-4.

In a demo with WIRED ahead of the feature’s release, Joanne Jang, the company’s product lead on model behavior, typed in a few sample queries. In one, Jang asked ChatGPT to write up a social media post for the opening of a café called Catio on Valentine’s Day; the bot performed the task. In another query, Jang indicated that she was opening a café called Catio on Valentine’s Day. She then navigated to Memory in ChatGPT’s settings; the bot had stored this piece of information about her. Similarly, when Jang asked for a coding tip, then indicated that she uses Python, ChatGPT recorded in Memory that Jang uses Python exclusively.

These bits of data will be referenced in all of Jang’s future conversations with ChatGPT. Even if she doesn’t reference Catio directly in another chat, ChatGPT will bring it up when relevant.


OpenAI says ChatGPT’s Memory is on by default, which means a user has to actively turn it off. The Memory can be wiped at any point, either in settings or by simply instructing the bot to wipe it. Once the Memory setting is cleared, that information won’t be used to train OpenAI’s AI models. It’s unclear exactly how much of that personal data is used for training while someone is chatting with the chatbot. And toggling off Memory does not mean you’ve totally opted out of having your chats train OpenAI’s models; that’s a separate opt-out.

The company also claims that it won’t store certain sensitive information in Memory. If you tell ChatGPT your password (don’t do this) or Social Security number (or this), the app’s Memory is thankfully forgetful. Jang also says OpenAI is still soliciting feedback on whether other personally identifiable information, like a user’s ethnicity, is too sensitive for the company to auto-capture.

“We think there are a lot of useful cases for that example, but for now we have trained the model to steer away from proactively remembering that information,” Jang says.

It’s easy to see how ChatGPT’s Memory function could go awry—instances where a user might have forgotten they once asked the chatbot about a kink, or an abortion clinic, or a nonviolent way to deal with a mother-in-law, only to be reminded of it or have others see it in a future chat. How ChatGPT’s Memory handles health data is also something of an open question. “We steer ChatGPT away from remembering certain health details, but this is still a work in progress,” says OpenAI spokesperson Niko Felix. In this way ChatGPT is singing the same old song about the internet’s permanence, just in a new era: It’s a great new Memory feature, until it’s a bug.

OpenAI is also not the first entity to toy with memory in generative AI. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt—one back-and-forth between the user and the chatbot—or have a multi-turn, continuous conversation in which the bot “remembers” the context from previous messages.
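The distinction between single-turn and multi-turn prompting comes down to whether prior messages are replayed with each new request. A minimal sketch (the message format here is a common convention, not Gemini’s actual API):

```python
# Sketch: in single-turn prompting, each request stands alone; in
# multi-turn prompting, earlier messages are sent back with the new
# prompt so the model can "remember" context within the conversation.

def build_single_turn(prompt: str) -> list[dict]:
    # Only the current prompt is sent; no context carries over.
    return [{"role": "user", "content": prompt}]

def build_multi_turn(history: list[dict], prompt: str) -> list[dict]:
    # Prior user/assistant messages are replayed alongside the new one.
    return history + [{"role": "user", "content": prompt}]

history = [
    {"role": "user", "content": "I'm opening a cafe called Catio."},
    {"role": "assistant", "content": "Congratulations! How can I help?"},
]
messages = build_multi_turn(history, "Draft a Valentine's Day post for it.")
# The model now sees the earlier mention of Catio with the new request.
```

The “memory” in this scheme lasts only as long as the conversation; persistent features like ChatGPT’s Memory go a step further by carrying facts across separate chats.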

An AI framework company called LangChain has been developing a Memory module that helps large language models recall previous interactions between an end user and the model. Giving LLMs a long-term memory “can be very powerful in creating unique LLM experiences—a chatbot can begin to tailor its responses toward you as an individual based on what it knows about you,” says Harrison Chase, cofounder and CEO of LangChain. “The lack of long-term memory can also create a grating experience. No one wants to have to tell a restaurant-recommendation chatbot over and over that they are vegetarian.”
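The core idea behind such a memory module can be illustrated with a toy store of per-user facts (a hypothetical sketch, not LangChain’s actual API):

```python
# Hypothetical sketch of long-term chatbot memory: facts learned about a
# user are stored outside any single conversation and injected into
# later prompts, so the user never has to repeat themselves.

class MemoryStore:
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = {}

    def remember(self, user_id: str, fact: str) -> None:
        # Append a new fact to this user's memory.
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list[str]:
        # Return everything stored about this user (empty if unknown).
        return self._facts.get(user_id, [])

store = MemoryStore()
store.remember("diner42", "is vegetarian")

# In a later, separate conversation, the stored facts are prepended to
# the prompt before the user's new message is sent to the model.
system_prompt = "Known about user: " + "; ".join(store.recall("diner42"))
```

A production system would persist the store to disk or a database and decide which conversational details are worth saving, but the recall-and-inject loop is the same.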

This technology is sometimes referred to as “context retention” or “persistent context” rather than “memory,” but the end goal is the same: for the human-computer interaction to feel so fluid, so natural, that the user can easily forget what the chatbot might remember. This is also a potential boon for businesses deploying these chatbots that might want to maintain an ongoing relationship with the customer on the other end.

“You can think of these as just a number of tokens that are getting prepended to your conversations,” says Liam Fedus, an OpenAI research scientist. “The bot has some intelligence, and behind the scenes it’s looking at the memories and saying, ‘These look like they’re related; let me merge them.’ And that then goes on your token budget.”

Fedus and Jang say that ChatGPT’s memory is nowhere near the capacity of the human brain. And yet, in almost the same breath, Fedus explains that with ChatGPT’s memory, you’re limited to “a few thousand tokens.” If only.

Is this the hypervigilant virtual assistant that tech consumers have been promised for the past decade, or just another data-capture scheme that uses your likes, preferences, and personal data to better serve a tech company than its users? Possibly both, though OpenAI might not put it that way. “I think the assistants of the past just didn’t have the intelligence,” Fedus said, “and now we’re getting there.”

Will Knight contributed to this story.

About Lauren Goode
