Open Source AI Has Founders—and the FTC—Buzzing

Y Combinator is famed for its Demo Days, where portfolio companies pitch their apps and wares in hopes of growing from fledgling startups into the next Airbnb. But on Thursday, the startup incubator hosted a mélange of founders, venture capitalists, and US policymakers in its airy industrial space in San Francisco to tackle a defining topic for so many startups today: AI as the latest frontier in the battle between Big Tech and the little guys.

For many early-stage tech entrepreneurs, questions about AI can carry existential weight. Ever since ChatGPT was unleashed in late 2022, OpenAI’s technology, along with fast follows from Google’s and Microsoft’s AI teams, has dominated the conversation in this new era of artificial intelligence. But the increasing availability—and potency—of open source AI models has the potential to upend those dynamics.

On Thursday, the enthusiasm for open source extended beyond just YC-backed founders who stand to benefit from a less expensive way to harness generative AI’s power. Lina Khan, the chairperson of the Federal Trade Commission, was one of the most prominent advocates for open source AI at the event.

Speaking to a crowd of approximately 200 entrepreneurs, Khan said it’s not an exaggeration to suggest that nearly all of Y Combinator's most successful companies wouldn’t exist without open source software and the community behind it. The FTC has been focused on defining and exploring open-weights AI models, which are slightly less “open” than fully open source AI models. With open-weights models available to them, “smaller players can bring their ideas to market,” Khan suggested.

Khan also clearly articulated the stakes for this particular crowd. “Conditions have allowed the biggest technology companies to get a leg up in the AI race,” she said. “If you control the raw materials, you can control the market and shut out smaller companies who don’t have the infrastructure to compete.”

Khan made these statements as part of a broader argument for fair and open competition in the tech industry, and in defense of the regulatory actions the FTC and the US Justice Department have taken against Big Tech companies over the past four years. US assistant attorney general Jonathan Kanter spoke Thursday at YC, too, emphasizing that the agencies are looking out for “little tech”—a phrase that would resonate with the YC crowd.

The appearance of two regulatory heavyweights before this “move fast and break things” crowd might have seemed unlikely as recently as a year ago. Founded in 2005 by the entrepreneurs Paul Graham and Jessica Livingston, among others, YC is more famously known for the intense mentorship and bootcamp-like atmosphere it provides fledgling companies than for its ties to the DC establishment.

The shift is intentional. Last October, Y Combinator CEO Garry Tan brought on policy expert Luther Lowe to help facilitate conversations between YC and DC. The arrival of Lowe, who spent more than 15 years in public policy at Yelp and was one of Google’s most prominent critics, has clearly brought a certain polish and high-profile policy talk to YC events. Thursday marked the second time that FTC chairperson Khan has spoken to YC founders since Lowe joined.


Many of Thursday’s talks were littered with the acronyms you’d expect from this assemblage of high-minded panelists: YC, FTC, AI, LLMs. But threaded throughout the conversations—foundational to them, you might say—was boosterism for open source AI.

It was a stark left turn (or return, if you’re a Linux head) from the app-obsessed 2010s, when developers seemed happy to containerize their technologies and hand them over to bigger platforms for distribution.

The event also happened just two days after Meta CEO Mark Zuckerberg declared that “open source AI is the path forward” and released Llama 3.1, the latest version of Meta’s own open source AI algorithm. As Zuckerberg put it in his announcement, some technologists no longer want to be “constrained by what Apple will let us build,” or encounter arbitrary rules and app fees.

Open source AI also just happens to be the approach OpenAI is not using for its biggest GPTs, despite what the multibillion-dollar startup’s name might suggest. This means that at least part of the code is kept private, and OpenAI doesn’t share the “weights,” or parameters, of its most powerful AI systems. It also charges for enterprise-level access to its technology.
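By contrast, when a model’s weights are published, anyone can download the parameters and run inference on their own hardware, with no per-token API fees or provider policies in the way. A minimal sketch of what that looks like in practice, using the Hugging Face transformers library; the meta-llama/Llama-3.1-8B-Instruct checkpoint is cited here purely as an illustrative example of an openly downloadable model (access is gated behind accepting Meta’s license on Hugging Face):

```python
# Minimal sketch: running an open-weights model locally with Hugging Face
# transformers. Assumes you have accepted the model license on huggingface.co
# and have hardware with enough memory to hold the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # one openly downloadable checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights live on your own machine, the provider can't change
# pricing or usage policies out from under you.
inputs = tokenizer(
    "Why might a startup prefer an open-weights model?", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That portability is precisely the dynamic founders at the event were weighing against the convenience of a closed, hosted API.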

"With the rise of compound AI systems and agent architectures, using small but fine-tuned open source models gives significantly better results than an [OpenAI] GPT4, or [Google] Gemini. This is especially true for enterprise tasks,” says Ali Golshan, cofounder and chief executive of Gretel.ai, a synthetic data company. (Golshan was not at the YC event).

“I don’t think it’s OpenAI versus the world or anything like that,” says Dave Yen, who runs a fund called Orange Collective for successful YC alumni to back up-and-coming YC founders. “I think it’s about creating fair competition and an environment where startups don’t risk just dying the next day if OpenAI changes their pricing models or their policies.”

“That’s not to say we shouldn’t have safeguards,” Yen added, “but we don’t want to unnecessarily rate-limit, either.”

Open source AI models have some inherent risks that more cautious technologists have warned about—the most obvious being that the technology is open and free: people with malicious intent are more likely to use these tools for harm than they would a costly private AI model. Researchers have pointed out that it’s cheap and easy for bad actors to train away any safety parameters present in these models.

“Open source” is also a myth in some AI models, as WIRED’s Will Knight has reported. The data used to train them may still be kept secret, their licenses might restrict developers from building certain things, and ultimately, they may still benefit the original model-maker more than anyone else.

And some politicians have pushed back against the unfettered development of large-scale AI systems, including California state senator Scott Wiener. Wiener’s AI Safety and Innovation Bill, SB 1047, has been controversial in technology circles. It aims to set standards for developers of AI models that cost more than $100 million to train, requiring certain levels of pre-deployment safety testing and red-teaming, protecting whistleblowers working in AI labs, and granting the state’s attorney general legal recourse if an AI model causes extreme harm.

Wiener himself spoke at the YC event on Thursday, in a conversation moderated by Bloomberg reporter Shirin Ghaffary. He said he was “deeply grateful” to people in the open source community who have spoken out against the bill, and that the state has “made a series of amendments in direct response to some of that critical feedback.” One change that’s been made, Wiener said, is that the bill now more clearly defines a reasonable path to shutting down an open source AI model that’s gone off the rails.

The celebrity speaker of Thursday’s event, a last-minute addition to the program, was Andrew Ng, the cofounder of Coursera, founder of Google Brain, and former chief scientist at Baidu. Ng, like many others in attendance, spoke in defense of open source models.

“This is one of those moments where [it’s determined] if entrepreneurs are allowed to keep on innovating,” Ng said, “or if we should be spending the money that would go towards building software on hiring lawyers.”
