Most tech moguls hope to sell artificial intelligence to the masses. But Mark Zuckerberg is giving away what Meta considers to be one of the world’s best AI models for free.
On Monday, Meta released the biggest, most capable version of its large language model, Llama, free of charge. Meta has not disclosed the cost of developing Llama 3.1, but Zuckerberg recently told investors that his company is spending billions on AI development.
Through this latest release, Meta is showing that the closed approach favored by most AI companies is not the only way to develop AI. But the company is also putting itself at the center of a debate over the dangers posed by releasing AI without controls. Meta trains Llama in a way that prevents the model from producing harmful output by default, but the model can be modified to remove such safeguards.
Meta says that Llama 3.1 is as clever and useful as the best commercial offerings from companies like OpenAI, Google, and Anthropic. On certain benchmarks that measure progress in AI, Meta says, the model is the smartest AI on Earth.
“It’s very exciting,” says Percy Liang, an associate professor at Stanford University who tracks open source AI. If developers find the new model to be just as capable as the industry’s leading ones, including OpenAI’s GPT-4o, Liang says, many of them could move over to Meta’s offering. “It will be interesting to see how the usage shifts,” he says.
In an open letter posted with the release of the new model, Meta CEO Zuckerberg compared Llama to the open source Linux operating system. When Linux took off in the late '90s and early 2000s, many big tech companies were invested in closed alternatives and criticized open source software as risky and unreliable. Today, however, Linux is widely used in cloud computing and serves as the core of the Android mobile OS.
“I believe that AI will develop in a similar way,” Zuckerberg writes in his letter. “Today, several tech companies are developing leading closed models. But open source is quickly closing the gap.”
However, Meta’s decision to give away its AI is not devoid of self-interest. Previous Llama releases have helped the company secure an influential position among AI researchers, developers, and startups. Liang also notes that Llama 3.1 is not truly open source, because Meta imposes restrictions on its usage—for example, limiting the scale at which the model can be used in commercial products.
The new version of Llama has 405 billion parameters, or tweakable elements. Meta previously released two smaller versions of Llama 3, one with 70 billion parameters and another with 8 billion, and today it also released upgraded versions of both, branded as Llama 3.1.
Llama 3.1 is too big to be run on a regular computer, but Meta says that many cloud providers, including Databricks, Groq, AWS, and Google Cloud, will offer hosting options to allow developers to run custom versions of the model. The model can also be accessed at Meta.ai.
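As a rough sketch (not drawn from Meta's announcement), this is how a developer might load one of the smaller Llama 3.1 variants with the open source Hugging Face transformers library; the model identifier and settings below are assumptions, and downloading the weights requires accepting Meta's license terms:

    # Minimal sketch: load an assumed smaller Llama 3.1 variant and generate text.
    # Requires the transformers and accelerate packages and a license-gated download.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed Hub identifier
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "In one sentence, what is an open-weight language model?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The 405-billion-parameter model itself would typically be reached through one of the cloud hosts named above or through Meta.ai rather than loaded this way on local hardware.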
Some developers say the new Llama release could have broad implications for AI development. Stella Biderman, executive director of the open source AI project EleutherAI, also notes that Llama 3 is not fully open source. But she points out that a change to Meta’s latest license will let developers train their own models using Llama 3, something that most AI companies currently prohibit. “This is a really, really big deal,” Biderman says.
Unlike OpenAI and Google’s latest models, Llama is not “multimodal,” meaning it is not built to handle images, audio, and video. But Meta says the model is significantly better at using other software such as a web browser, something that many researchers and companies believe could make AI more useful.
After OpenAI released ChatGPT in late 2022, some AI experts called for a moratorium on AI development for fear that the technology could be misused or prove too powerful to control. Existential alarm has since cooled, but many experts remain concerned that unrestricted AI models could be misused by hackers or used to speed up the development of biological or chemical weapons.
"Cyber criminals everywhere will be delighted,” says Geoffrey Hinton, a Turing award winner whose pioneering work on a field of machine learning known as deep learning laid the groundwork for large language models.
Hinton joined Google in 2013 but left the company last year to speak out about the risks that might come with more advanced AI models. He says that AI is fundamentally different from open source software because models cannot be scrutinized in the same way. “People fine-tune models for their own purposes, and some of those purposes are very bad,” he adds.
Meta’s careful handling of previous Llama releases has helped allay some of those fears. The company says it puts Llama through rigorous safety testing before release and adds that there is little evidence its models make it easier to develop weapons. Meta says it will also release several new tools to help developers keep Llama models safe by moderating their output and blocking attempts to break its restrictions. Jon Carvill, a spokesman for Meta, says the company will decide on a case-by-case basis whether to release future models.
Dan Hendrycks, a computer scientist and the director of the Center for AI Safety, a nonprofit organization focused on AI dangers, says Meta has generally done a good job of testing its models before releasing them. He says that the new model could help experts understand future risks. “Today’s Llama 3 release will enable researchers outside big tech companies to conduct much-needed AI safety research.”