Elon Musk Announces Grok, a ‘Rebellious’ AI With Few Guardrails

Last week, Elon Musk flew to the UK to hype up the existential risk posed by artificial intelligence. A couple of days later, he announced that his latest company, xAI, had developed a powerful AI—one with fewer guardrails than the competition.

The AI model, called Grok (a name that means “to understand” in tech circles), “is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!” reads an announcement on the company’s website. “It will also answer spicy questions that are rejected by most other AI systems.”

The announcement does not explain what “spicy” or “rebellious” means in practice, but most commercial AI models will refuse to generate sexually explicit, violent, or illegal content, and they are designed to avoid expressing biases picked up from their training data. Without such guardrails, the worry is that an AI model could help terrorists develop a bomb, or that products built on it could discriminate against users based on characteristics such as race, gender, or age.

xAI does not list any contact information on its website, and emails sent to common addresses bounced back. An email sent to the press address for X received an automated response reading, “Busy now, please check back later.”

The xAI announcement says that Grok is built on top of a language model called Grok-1 that has 33 billion parameters. The company says it developed Grok in two months, a relatively short amount of time by industry standards, and also claims that a fundamental advantage is its “real-time knowledge of the world via the X platform,” or the platform formerly known as Twitter, which Musk acquired for $44 billion in 2022.

Stella Biderman, an AI researcher with EleutherAI, an open source AI endeavor, says the claims made in the xAI announcement seem plausible. Biderman suggests that Grok will use what’s known as “retrieval augmented generation” to add up-to-date information from X to its output. Other cutting-edge language models do this using search engine results and other information.
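In rough outline, retrieval augmented generation works something like the sketch below: fetch documents relevant to the question, splice them into the prompt, and let the model answer with that fresh context in view. The corpus, retriever, and model call here are simplified placeholders for illustration, not anything xAI has described.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# The corpus, retriever, and generate() below are stand-ins: a real system
# would query a live index (such as recent X posts) and call an actual LLM.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    return f"[model response conditioned on a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str, corpus: list[str]) -> str:
    # 1. Pull fresh context, 2. splice it into the prompt, 3. generate.
    context = "\n".join(retrieve(question, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

recent_posts = [
    "xAI announces Grok, a new chatbot.",
    "OpenAI holds its first developer conference.",
]
print(answer_with_rag("What did xAI announce?", recent_posts))
```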


Large language models have proven stunningly capable over the past year or so, as highlighted most famously by OpenAI’s groundbreaking chatbot, ChatGPT.

These models feed on huge amounts of text taken from books and the web, and then generate text in response to a prompt. They are typically also given further training by humans to make them less likely to produce offensive, rude, or dangerous outputs, and to make them more likely to answer questions in ways that seem coherent and plausibly correct, although they are still prone to producing errors and biases.
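At its core, that first stage amounts to learning which tokens tend to follow which. The toy word-level counter below illustrates the shape of the idea only; real models use large neural networks trained on vast corpora, plus the separate human-feedback stage described above.

```python
# A toy illustration of the core language-modeling objective: predict the
# next token given the text so far. This bigram counter is a simplification
# for illustration, not how production models are built.

import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(model: dict, prompt: str, length: int = 10) -> str:
    """Extend the prompt by repeatedly sampling a likely next word."""
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

model = train_bigram("the model reads text and the model writes text and the model answers questions")
print(generate(model, "the model"))
```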

The language models developed by OpenAI, Google, and startups like Anthropic, Cohere, and Inflection AI typically refuse to, for example, offer advice on how to commit crimes, and will demur when asked for racy material.

It is unclear from the xAI announcement whether Grok has been trained to be more open to requests deemed inappropriate by other models, or whether it simply has not received the same kind of secondary training.

xAI posted the results of several benchmark tests designed to gauge the capabilities of large language models. Andrei Barbu, a research scientist at MIT, says the results seem similar to those of other popular models.

xAI says that Grok has so far been tested by a small number of users but will now be made available to a wider group of people who apply for access. Musk said in a post on X that the model would be made available to all X Premium+ subscribers. xAI has not said that it will release any models publicly.

The announcement for xAI says that the company is working on several key challenges involved in advancing AI, including building models that can assess the reliability of their own output and ask for assistance when necessary, and making models that are more robust to “adversarial attacks” designed to make them misbehave. It states: “we will work towards developing reliable safeguards against catastrophic forms of malicious use.”

Musk was an early investor in generative AI. The billionaire helped OpenAI get its start by investing between $50 million and $100 million in the company in 2015. He pulled his support for the company (which at the time was a nonprofit) in 2018 after failing to take control of it.

After OpenAI changed from a nonprofit to a for-profit business and accepted investment from Microsoft—and following ChatGPT’s runaway success—the world’s richest man openly criticized OpenAI and accused its language models of being overly “woke.”

Musk then announced in July 2023 that he had put together a small but well-respected team of AI researchers to develop “less biased” forms of AI.

Some AI researchers have already tried building language models with a more diverse range of political opinions. OpenAI has also said that it will work to remove political biases from its models.

In the year since Musk took control of X, the platform has reinstated a number of controversial users, including those from the far right, in line with its new owner’s stated opposition to moderating content on social media. Multiple studies have found that disinformation has increased on the platform since Musk took over.


The Grok announcement states: “We are building AI tools that maximally benefit all of humanity. We believe that it is important to design AI tools that are useful to people of all backgrounds and political views. We also want to empower our users with our AI tools, subject to the law. Our goal with Grok is to explore and demonstrate this approach in public.”

Another AI researcher who is familiar with xAI’s plans, but did not want to be named because discussions regarding the company were private, said that xAI is also working on a coding tool that will use the model. Microsoft offers a plug-in for its coding program, Visual Studio Code, that is powered by a coding-specific language model developed by OpenAI. Training a model on computer code can improve its performance at other tasks.

WIRED contacted several prominent AI researchers but was unable to find any who have access to Grok as yet.

Barbu at MIT says that with so many good language models out there now, the question is what Musk plans to do with Grok, and how he hopes it will make money. “They’ve assembled good people and it seems like they’re now poised to do something,” he says. “It’s just not clear what.”
