Meet the Lawyer Leading the Human Resistance Against AI

On a Friday morning in October, in the lobby of a sleek San Francisco skyscraper, Matthew Butterick was headed toward the elevators when a security guard stopped him. Politely, the guard asked if he was lost.

It was an honest mistake. In checkerboard Vans, a black baseball cap, and a windbreaker, Butterick didn’t look like the typical corporate warrior. He looked more like the type of guy who makes fun of the typical corporate warrior. He explained, equally politely, that he was in fact a lawyer with a legitimate reason to be in the building. His co-counsel, Joseph Saveri, leads an antitrust and class-action firm headquartered there.

Apologies, sir—right this way.

He might not look like it, but Butterick is the unlikely driving force behind the first wave of class-action lawsuits against big artificial-intelligence companies. He’s on a mission to make sure writers, artists, and other creative people have control over how their work is used by AI.

This is not where he expected to be. Until recently, Butterick wasn’t a practicing attorney at all, and he’s certainly not anti-technology. For most of his life, he’s worked as a self-employed designer and programmer, tinkering with specialty software. “I’m just a dude in his house,” he says, shrugging. “No assistant, no staff.” His idea of fun? Writing an app from scratch for personal use. He flies into the Bay Area for the requisite court dates—all the lawsuits have been filed in the Northern District of California—but he still spends most of his time working solo from the Los Angeles home he shares with his wife.

Yet when generative AI took off, he dusted off a long-dormant law degree specifically to fight this battle. He has now teamed up with Saveri as co-counsel on four separate cases, starting with a lawsuit filed in November 2022 against GitHub, claiming that the Microsoft subsidiary’s AI coding tool, Copilot, violates open-source licensing agreements. Now, the pair represent an array of programmers, artists, and writers, including comedian Sarah Silverman, who allege that generative AI companies are infringing upon their rights by training on their work without their consent.

The complaints all take slightly different legal approaches, but together, they represent a crusade to give creative people a say in how their work is used in AI training. “It’s pushback,” Butterick says. It’s a mission that AI companies vigorously oppose, because it frames the way they train their tools as fundamentally corrupt. Even many copyright and intellectual property scholars see it as a long shot.

The big question is: What will the courts think?

These are some of the most closely watched legal brawls of the moment. For Silicon Valley, the dawn of the AI age has been a spiritual revival; after a decade of increasing public wariness about tech’s influence on the world, the roaring enthusiasm for tools like ChatGPT has created a new boom. Call it the Second Age of “Move Fast and Break Things.” There’s plenty of hype, and there are eye-popping valuations. (OpenAI’s current reported value, for example, is $80 billion.) But this boom is distinct from the recent hype cycles around the metaverse and crypto in that generative AI is actually useful. It’s still a gold rush, for sure. This time, though, the hills aren’t hollow, and the industry knows it. These lawsuits, which allege that OpenAI, Meta, Stability AI, and other companies broke the law when they built their tools, threaten the steamroller momentum of the generative AI movement. The stakes are sky-high.


The outcomes could help entrench the industry as we know it—or force it to make radical changes. And while a security guard might not have recognized Butterick, the legal teams at AI companies certainly know him by now. Their futures could depend on how well, or poorly, he makes his cases.

Butterick grew up in New Hampshire. He was a strong student, good enough to get into Harvard in the late ’80s. Once there, though, he felt alienated from his more conventionally ambitious classmates. They were already thinking about things like law school. He was drawn to a more esoteric world. Tucked into the basement of his dormitory in Cambridge, Massachusetts, was a long-running print shop called Bow & Arrow Press, which gave students a unique opportunity to learn traditional printing techniques. It was a cozy, beloved hangout, with whitewashed, poster-covered walls, machinery that looked ancient, and an atmosphere that attracted offbeat aesthetes. When Butterick found it, his life changed.

He became obsessed with typography. He started working in font design when he was still in school. “People in my life thought it was a ridiculous thing to do,” he says. He loved playing with the old tools, but even more than that, he loved thinking about new ways to create beautiful typefaces. After he graduated in 1992, he had his own ambitions: He’d heard there were exciting things happening in the tech world in San Francisco, and it seemed like the perfect place for a guy who wanted to bring typography into the computer age. Two years later, he moved west.

Like so many young Ivy Leaguers who show up in the Bay Area hoping to make a name for themselves in tech, Butterick decided he might as well try his hand at a startup. “My dotcom adventure,” he calls it, sounding half-embarrassed. He founded a web design company, Atomic Vision. By the time he was 28, he had around 20 employees. But he didn’t love managing people. When an opportunity to sell the company came in 1999, he took it.

Flush with cash and unsure what to do next, Butterick figured he’d follow in the footsteps of countless other young adults who don’t know what they want out of life: He went to grad school. He enrolled at UCLA to get a law degree. After graduating, he started a website called Typography for Lawyers. “It was meant to be a nerdy sideline,” he says. “But it snowballed.” Turns out, lawyers love fonts. He turned the website into a shockingly popular book of the same name, which he published in 2010. Courts and private firms across the country started using his typefaces. After adopting his Equity font, a Fifth Circuit judge praised it as a fully loaded F-150 compared to the Buick that was Times New Roman. “The stuff of finicky opinion-readers’ dreams,” the judge wrote.

Joseph Saveri was another such font-loving litigator. “These AI cases would not be happening if we weren’t mutual admirers of typography,” Butterick says. The pair struck up an email friendship in 2012, when Saveri wrote to ask for advice on how to format pleadings. Saveri took on cases that Butterick admired, like a successful class action against Meta on behalf of content moderators.


Sometimes Saveri read Butterick’s personal blog, an eclectic mix of gift guides and acerbic musings about tech and type. In the summer of 2022, shortly after GitHub released its Copilot AI tool, Butterick published a ranty, passionate post arguing that it stole work from programmers. Saveri read it, reached out, and proposed a collaboration. Butterick wasn’t the only person upset about Copilot, after all. “I was one of the chorus,” Butterick says. “The difference was that I had a law degree that I could reactivate.” In November 2022, they filed their first complaint.

Around the time Butterick and Saveri were prepping their GitHub lawsuit, artist Karla Ortiz was having her own AI revelation. As she worked from her loft in San Francisco, she began researching how image generators were trained. She was disturbed by what she found. “It all felt really exploitative and gross,” she says. “It’s existential. I feel there’s no choice but to stand up and be loud.”

Ortiz works in several disciplines, including fine art, but many of her higher-profile projects are conceptual art and illustration for entertainment and gaming outfits like Marvel, Ubisoft, and Universal. For working artists, this type of job is often what pays the bills. It’s also the type of job artists fear is most threatened by the AI revolution. Ortiz has seen pitch work, where artists are hired to create visuals to help sell ideas to investors, dry up. “That work was bread-and-butter for so many of us,” she says. “It’s completely gone.”

Along with a few friends, Ortiz started contacting lawyers to see if anyone might represent artists who wanted to sue AI companies for infringing on their rights. They got a cascade of noes. Then she read about the Copilot case. She emailed Saveri’s firm; for once, instead of dismissal, she was met with enthusiasm. She felt Butterick actually got what she was going through. “He understands artists,” she says. She knew she’d found her lawyers.

In January 2023, Ortiz and two other artists became the named plaintiffs in Butterick and Saveri’s second class action, this one filed against Stability AI, Midjourney, and DeviantArt. In the summer of 2023, Butterick and Saveri filed three additional class-action cases, this time on behalf of writers: two against OpenAI, which will be consolidated into a single case, and one against Meta. People started paying attention to what they were up to. Ortiz appeared in front of the US Senate Judiciary Subcommittee on Intellectual Property, testifying about AI and copyright alongside legal experts; she has emerged as one of the most prominent artists advocating for changes to AI training.

By that point, the backlash against generative AI’s impact on the arts was in full swing. As AI tools exploded in popularity, concerns about what they’d do to creative industries grew too. In Hollywood, the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists went on overlapping strikes, both agitating for guardrails and rules about the use of AI in their fields. In just about every creative industry you can think of—filmmaking, podcasting, music, translation, journalism, graphic design, copywriting, even, yes, typography—movements critical of both the training methods and economic impact of AI have arisen. More than 15,000 writers, organized by the Authors Guild, signed a letter to generative AI companies asking for compensation and a licensing solution for the use of training data.


“Publishers, authors, visual artists, software developers, the recording industry—everybody’s angry,” says Pam Samuelson, a copyright lawyer who codirects the Berkeley Center for Law & Technology. “Some other technologies have been upsetting to one group of people. Now, everybody’s upset.”

Before Butterick and Saveri filed the GitHub lawsuit, the only major case involving AI training data working its way through the courts predated the generative AI boom. In 2020, the media company Thomson Reuters, which owns the legal research firm Westlaw in addition to the Reuters news agency, sued an artificial-intelligence research firm called Ross Intelligence, alleging that Ross trained its AI tool on Westlaw’s legal summaries without properly licensing them. (That dispute is set to go to trial next spring, which means it may end up being the first precedent-setting case in this field.)

The Thomson Reuters case might once have seemed like a one-off. Not anymore: AI companies are now well aware that they’re vulnerable to litigation, because the lawsuits just keep coming.

At the beginning of the year, Getty Images sued Stability AI in both the US and the UK. In September, two additional groups of writers filed complaints against OpenAI. One group included George R. R. Martin, John Grisham, and Jonathan Franzen; the other counted Michael Chabon, Ayelet Waldman, and David Henry Hwang among its plaintiffs. In October, a group of Christian authors, including former governor of Arkansas Mike Huckabee, filed yet another suit. (“Imitation is the sincerest form of flattery,” quips Saveri.) That same month, a group of music labels including Universal Music Group sued Anthropic for using their copyrighted materials in its training data, claiming that Anthropic’s AI unlawfully distributed lyrics in its output. Just this week, a firm representing a group of nonfiction writers filed a proposed class-action suit against OpenAI and Microsoft.

If it wasn’t official when Butterick and Saveri filed their first few cases, it is now: This is an all-out IP War.


Copyright law can be a sleepy discipline. But every so often, moments like this come along, when a new technology rapidly upends cultural norms. In the late 1890s, that technology was the piano roll, which allowed player pianos to produce music without humans touching the keys. (Several experts I talked to for this story brought up piano rolls as a classic copyright and tech case study.) Piano rolls amazed audiences and gave pianists a new way to record their performances—but they also terrified live performers, who worried that their careers were in jeopardy, and enraged music publishers, who sued piano roll makers for copyright infringement. (Sound familiar?) They lost. This time around, nothing is certain. But copyright experts see this as a tough fight, one unlikely to resolve neatly or without bruising.

“Right now, we’re in one of those classic legal disputes in an area of new technology where parties are fighting over metaphors,” says James Grimmelmann, a professor of internet law at Cornell University. “It’s about telling a compelling story about how AI works.”

For AI companies, that means leaning on the idea that their tools simply train on art the way humans do, as an apprentice painter might study the Old Masters. (GitHub, Meta, OpenAI, and Stability AI declined to comment for this story.) For their opponents, it means framing AI training as a robotic scourge: grimy algorithmic extraction disguised as innovation, corporations automating the act of stealing art.

Grimmelmann has been impressed by how Butterick and Saveri are telling their side of the story. “Among the best-formatted complaints I’ve seen,” he says. In his eyes, they’ve tapped into the emotional turmoil creative people feel when they realize that these shiny new tools have been built atop their labor, sometimes generating work that competes with their own. But he’s also watched how the pair have already run into their first obstacles.

This October, US District Judge William Orrick trimmed down the artists’ Stability AI case, dismissing a number of the claims. He did, however, allow the plaintiffs time to amend all of them, which means they’ll be able to essentially rework the claims and turn in another draft later this year. In November, something similar happened in the Meta case. US District Judge Vince Chhabria said he would grant Meta’s motion to dismiss allegations that text generated by its AI, Llama, violated writers’ copyright, but as with the artists’ case, Butterick says, the judge signaled from the bench that the plaintiffs would be able to amend. Notably, Meta did not challenge the central claim in the case—that the use of training data itself is an infringement—and that claim is going forward. “So far, it doesn’t seem like they’ve had success in convincing judges to wholesale adopt their framing. But the judges remain open to it, I think,” Grimmelmann says.

Judge Chhabria was quick to point out, though, that he wasn’t buying the claims he dismissed as they were presented. He told Butterick and Saveri that he understood their core theory but not the others: “Your remaining theories of liability I don’t understand even a little bit.”

He was especially critical of the theory that Llama—the tool as a whole—is an infringing work. “That makes my head explode when I try to understand that.”


Many copyright experts are skeptical too. Berkeley’s Samuelson, for example, says it’s “ridiculous” to claim everything an AI outputs is infringing by default because of how it is trained. She also has minimal patience for arguments that copyright law is meant to shield creative fields from change. “Copyright isn't a jobs program.”

There are other objections too. The US Copyright Office is currently accepting comments for its study on AI, and many of the submissions reveal starkly different schools of thought on whether AI training violates copyright. The Copia Institute, a think tank started by Techdirt founder Mike Masnick, argues bluntly that artists have no right to exclude their work from AI training, because it views that training as more akin to reading than copying. “Ultimately when we speak of training AI we are speaking about letting software ‘read’ or otherwise consume works on behalf of the people developing it. Copyright law does not prevent reading,” its comment states. If the courts adopt this line of thinking—the training-as-consuming metaphor—it will be difficult to sway them with arguments founded on the idea of training as theft.

Most experts aren’t keen to predict exactly how these cases will shake out. Consensus is strong, though, about how the AI companies are likely to defend themselves: with the fair use doctrine, a common shield against accusations of copyright infringement. Some of the defendants are already talking about it; Midjourney, for example, cited “obvious fair use defenses” in its motion to dismiss Ortiz’s case.

In the US, “fair use” means that a copyrighted work was used without permission, but the use is allowed anyway, to promote creativity and freedom of expression. It’s a broad, fuzzily defined, and absolutely essential doctrine. It’s why satirists can publish twists on books and movies, why academics can include excerpted passages in their own work, and why you’re allowed to record a TV show and watch it later.

Big Tech has embraced fair use with gusto. In 2013, facing a lawsuit from the Authors Guild, Google successfully argued that it was kosher to copy millions of books and upload snippets of them online, because it was creating a searchable index for the public. (The next year, another court reached a similar decision about a second book-scanning project, the HathiTrust Digital Library.)

But, again, fair use is a nebulous concept. “Early on, we heard from opponents that the Authors Guild v. Google case would be determinative,” Butterick says. If the courts said Google could scan millions of books, why couldn’t AI companies do the same? He’s unconvinced. “The point of the Google Books project was to point you to the original books, right? ‘Here’s where you can find this book in a library.’ Generative AI doesn’t do any of that. It doesn’t point you to the original work. The opposite—it competes with that work.”

He’s more enthused about a recent US Supreme Court decision: Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith. It’s a convoluted case but an important one. The backstory starts in 1981, when photographer Lynn Goldsmith shot a portrait of Prince. In 1984, Vanity Fair licensed the photo so that Andy Warhol could use it as a reference. Warhol created a silkscreen version of it, which Vanity Fair published alongside an article about the musician, crediting Goldsmith. (Like WIRED, Vanity Fair is owned by Condé Nast.)


Later, Warhol made an additional 15 images based on Goldsmith’s photo, a set that came to be known as the “Prince series.” After Prince died in 2016, Condé Nast licensed an image from that series from the Andy Warhol Foundation so it could use it on a special-edition magazine. Goldsmith was not credited or paid. When she contacted the Warhol Foundation to object, it preemptively sued her. She countersued for copyright infringement. Last May, the Supreme Court ruled in Goldsmith’s favor, holding that the purpose and character of Warhol’s work was too similar to Goldsmith’s, since both ended up licensed to magazines.

Although the Warhol case didn’t involve AI, it has major implications for all of the current cases: It showed that the courts are willing to interpret fair use more narrowly than they have in the past. One point AI critics return to frequently is that human creators get a raw deal when AI tools are allowed to train on their work, partly because they deserve a choice in the matter, but also because the generators then churn out pastiches that can be used as substitutes for the original work. Warhol v. Goldsmith suggests that focusing on the competition angle could pay off.

Right now, though, Butterick is focused on amending the complaints in the Stability AI and Meta cases and continuing to prepare for the other two lawsuits, against GitHub and OpenAI. This kind of litigation can seem painfully slow. Although the amended complaint in the Stability AI case is due this month, the judge won’t actually rule on it until April, and discovery is paused in the interim.

This slowness is a liability for the plaintiffs. “The passage of time has been good for the AI companies,” Grimmelmann says. These AI tools are growing more popular with each passing week, and the novelty is wearing off. As the products get woven into daily life, judges may grow wary of decisions that would force AI companies to scrap their current offerings. Butterick and Saveri know this—it’s why they took the risk of being first to file this type of lawsuit. The AI companies, meanwhile, have been very successful at peddling the narrative that it’s naive and a little ridiculous to try to stop them.

But they are not—not yet, anyway—impervious. A big loss could be catastrophic. If a court finds that their training is illegal and requires something like algorithmic disgorgement—basically, deleting the infringing data sets and starting from scratch—it would be disastrous. So would some licensing outcomes. “If they need to individually negotiate permission from each copyright owner or they get hit with large damages, that’s untenable for them,” Grimmelmann says.

Both Butterick and Saveri see licensing as a potential compromise. “I don't think anybody thinks that it's an all-or-nothing solution—either they do whatever they want or there's no AI,” Saveri says. “Maybe we're in the Napster period, and something that feels like Spotify will emerge.”

Grimmelmann believes there may be a future in which a “practical détente” is reached between major copyright holders and AI companies, where record labels, media companies, or publishers get compensated for providing “enhanced access” to their content, including high-quality metadata.


And, of course, the AI companies could always settle these cases, avoiding the worst-case scenarios that could wholly hobble them. But some experts, like UC Law San Francisco copyright professor Ben Depoorter, suspect these companies might be ready to take their chances. “I don’t expect settlements,” Depoorter says. Instead, he sees a “fight to the bitter end.”

Then again, this matter may not be settled through the courts alone. Public opinion matters. Remember the piano roll? Back in 1908, the Supreme Court decided that piano roll makers did not have to pay royalties to composers, striking a blow against artists who hoped to use copyright law to stop new technology from building atop their work without permission. But that wasn’t the end of the story. A year later, after major backlash to the ruling, Congress passed the Copyright Act of 1909, which undercut the notion that a mechanical copy didn’t count as a copy and required that mechanically reproduced songs be licensed.

Even if tech companies win some of the legal battles, the story that Butterick and his clients are telling about artists and AI could nevertheless influence the larger war. Already, there are signs that their ideas about fair use are gaining traction. This month, a Stability AI executive resigned because he disagreed with the company’s position that training models on unlicensed copyrighted work is fair use. In his resignation note, he specifically pointed out that AI-generated music can compete with music made by humans.

When asked if he’s optimistic about the future of AI, Butterick takes a longer view. “I’m just one piece of this—I don’t want to call it a campaign against AI, I want to call it the human resistance,” he says. “And it’s going to be worldwide. We’ve now talked to lawyers in Sweden, Denmark, Germany, and Australia. It’s happening everywhere.” He sees his role as a public gadfly as deeply connected to his background in the arts.

“One of the interesting features of typography and design that gets overlooked by casual observers is the degree to which a lot of the major figures have been shit-stirrers,” he says. William Morris, for example. The leader of the Arts and Crafts movement is best known for his intricate wallpaper designs, but he was also a typographer—and one possessed of fervent beliefs about the role technology should play in artistry. Like Butterick, he was both open to experimentation with new tools and deeply wary of mechanization. Morris’ acolyte Charles Robert Ashbee, another major designer, summed up their shared approach in 1894, when he wrote: “We do not reject the machine, we welcome it. But we would desire to see it mastered.”

Nearly 130 years later, Butterick’s goals echo this wish. At a moment when the world is dazzled by what machines are making, he wants to remind us that people deserve to determine the endings to their own stories.
