As ChatGPT’s first birthday approaches, presents are rolling in for the large language model that rocked the world. From President Joe Biden comes an oversized “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” And UK prime minister Rishi Sunak threw a party with a cool extinction-of-the-human-race theme, wrapped up with a 28-country agreement (counting the EU as a single country) promising international cooperation to develop AI responsibly. Happy birthday!
Before anyone gets too excited, let’s remember that it has been over half a century since credible studies predicted disastrous climate change. Now that the water is literally lapping at our feet and heat is making whole chunks of civilization uninhabitable, the international order has hardly made a dent in the gigatons of fossil fuel carbon dioxide spewing into the atmosphere. The United States has just installed a climate denier as the second in line to the presidency. Will AI regulation progress any better?
There’s some reason to think so. In contrast to the climate issue, where a multitrillion-dollar industry mounted an all-out campaign to discredit the threats and thwart necessary measures to cut carbon, the big powers of AI seem to be begging for regulation. They surely have their own interests at heart, but at least there’s an acceptance that rules are needed. Also, unlike the case with the climate, governments are taking seriously the threats posed by AI relatively early in the technology’s development. Both the Biden plan and the international agreement represent commendably serious efforts to handle AI before it handles us.
Given that, it almost seems petty to nitpick about the actual content. But I will anyway. Let’s start with Biden's executive order. I read all 19,811 words of government-speak, so you won’t have to. By the end, I was jonesing for Dramamine. How does the president intend to encourage the benefits of AI while taming its dark side? By unleashing a human wave of bureaucracy. The document wantonly calls for the creation of new committees, working groups, boards, and task forces. There’s also a consistent call to add AI oversight to the tasks of current civil servants and political appointees.
Among the things the document lacks is a firm legal backing for all the regulations and mandates that may result from the plan: Executive orders are often overturned by the courts or superseded by Congress, which is contemplating its own AI regulation. (Although, don’t hold your breath, as a government shutdown looms.) And many of Biden’s solutions depend on self-regulation by the industry that’s under examination—whose big powers had substantial input into the initiative.
You can’t fault Biden’s order for a lack of breadth. Pretty much every AI hot button is dealt with in some way, if only to make a vow to come up with solutions later. (That’s how it handles the tricky issue of generative AI and copyright.) Overall, it’s a stunning commitment to mobilize government bureaucracy to grapple with every worrisome aspect of a new class of technology, including ones most of us never thought of. In paragraph after subparagraph, the White House orders up complicated multi-agency studies, each one involving deep interaction with industry and consultation with experts. Biden’s order assigns bureaucrats to produce complicated reports as casually as some order DoorDash meals.
The Department of Homeland Security will start an Artificial Intelligence Safety and Security Board; the Department of Health and Human Services will organize an AI Task Force; the director of the Office of Management and Budget will convene an interagency council to coordinate the use of AI. And so on. It’s going to be a beast to make sure that the groups including outsiders won’t be plagued by conflicts of interest. (The people serving on boards and committees consisting only of government people will have to wait until later to take jobs in the DC offices of AI powers.) The mother of all AI working groups will be the White House AI Council, including over 30 bureaucratic bigwigs, from the Chairman of the Joint Chiefs of Staff to the Director of the Gender Policy Council. This is going to be a bonanza for Zoom!
Reading the order, one might think that for the next year, half of the executive branch will be buried in AI homework. Before this year is out, everyone from the Attorney General to the Secretary of Agriculture will need Sam Altman on speed dial to fill in some blanks. I counted the deadlines for the various reports and tasks Biden has ordered up. Two must be completed in 30 days, five within 45 days, six within 60 days, 11 in 90 days, seven in 120 days, three in 150 days, 38 in 180 days, 11 in 270 days, and 11 more within 365 days. To spare you the math, I totaled up all those big asks: 94. And there’s a whole bunch of other reports and tasks required without deadlines. One typical example: The chairman of the Council of Economic Advisers must prepare and submit what is presumably a thesis-level report on “labor-market effects of AI.” Sounds like a job for GPT-4! Some of the requests are vague, like “encouraging” the Federal Communications Commission and the Federal Trade Commission to consider a laundry list of actions. But every single one will require tedious execution, including meetings, drafts, interviews, consultations with industry leaders and academics, and last-minute ass-covering.
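If you’d rather not take my arithmetic on faith, the tally is simple enough to script. Here’s a minimal Python sketch; the per-deadline counts are just the ones listed above:

    # Deliverables in the executive order, grouped by deadline:
    # days until due -> number of reports/tasks, as counted above.
    deadline_counts = {
        30: 2, 45: 5, 60: 6, 90: 11, 120: 7,
        150: 3, 180: 38, 270: 11, 365: 11,
    }

    for days, count in sorted(deadline_counts.items()):
        print(f"{count:>2} asks due within {days} days")

    print("Total deadline-bound asks:", sum(deadline_counts.values()))  # 94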
It’s not clear the people currently on government payrolls are up to these assignments. Silicon Valley is desperate for AI talent, and fiercely competes to recruit it. To perform some of the highly technical tasks requested—like vetting “frontier” LLMs even more powerful than the current mind-blowing chatbots—will require A-level programmers familiar with red-teaming, eliminating bias from datasets, and, as the order puts it, the mechanics of “machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.” That’s a big ask for the government. The Biden order also instructs every big agency to designate a Chief Artificial Intelligence Officer within 60 days—a requirement handed down by a White House that for three years has failed to fill its vacancy for a Chief Technology Officer.
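For a sense of what that 10^20 threshold means, consider a back-of-the-envelope sketch. The per-chip figure below is my own assumption (roughly the peak throughput of a single top-end AI accelerator, on the order of an Nvidia H100), not anything specified in the order:

    # How many top-end accelerators would a cluster need before hitting
    # the order's 10^20 operations-per-second reporting threshold?
    CLUSTER_THRESHOLD_OPS = 1e20  # ops/second, from the executive order
    PER_CHIP_OPS = 1e15           # assumed: ~peak of one high-end AI chip

    chips_needed = CLUSTER_THRESHOLD_OPS / PER_CHIP_OPS
    print(f"Roughly {chips_needed:,.0f} chips to cross the threshold")  # ~100,000

If that assumption is in the right ballpark, the rule is aimed at clusters of roughly a hundred thousand cutting-edge chips—exactly the kind of infrastructure that takes scarce expertise to audit.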
The order does mandate that the official website AI.gov should devote some pages to recruitment. The front page urges visitors to “join the national AI talent surge.” But even the snappiest memes might have difficulty snaring AI-trained recent graduates considering offers of high six-figure salaries from Google or OpenAI. One excellent idea in the EO suggests changing immigration policy to remove current hurdles for AI talent seeking to work in the US. But I suspect that those opposed to any exceptions that increase immigration—that is, every Republican—might push back on this. Maybe, like other presidential mandates on immigration, it will be challenged in court. Jennifer Pahlka, who helped create the US Digital Service, has written that in order to fill the sudden need for AI experts, the government should simply overhaul its archaic hiring practices. “AI is about to hit us like a Mack truck,” she writes. “We need a civil service that works, and we need it now.” It’s unlikely that the overhaul she suggests will occur in time to meet all those 60, 90, or even 270-day deadlines.
In contrast to the thick, detailed to-do list that is the Biden executive order, Rishi Sunak’s Bletchley Declaration comes off as an expression of good intentions. The achievement isn’t specifying any action to be taken but getting all those countries to put their signature on a single statement before going home. Many of the individual signers, notably the EU and China, are well along on their journey to regulate AI, but as a united entity, the international community is still at the starting gate. In less than 1,200 words—shorter than this essay—the declaration acknowledges the promise and risk of AI, and cautions people building it to do it responsibly. Of course, Google, Microsoft, and the rest will tell you they already are. And the lack of specifics seems to contradict the declaration’s premise that the situation is urgent. “There is potential for serious, even catastrophic harm” from AI models, it says, apparently referring to human extinction. But issues including bias, transparency, privacy, and data are also acknowledged, and the signatories “affirm the necessity and urgency of addressing them.” The only deadline in this document, however, is a promise to meet again in 2024. By then, the Biden administration will be waist deep in reports, interagency committees, and recruiting efforts. Meanwhile, nothing in either document seems likely to impede AI from getting more powerful and useful, or potentially more dangerous.
Time Travel
The struggle to contain AI while reaping its benefits has been going on for decades. I pondered this dialectic when writing my curtain-raiser to the now-famous match between chess champion Garry Kasparov and IBM’s Deep Blue computer in May 1997. Newsweek’s cover line ran, “The Brain’s Last Stand.”
There's a deep irony in this epochal clash between cell and circuitry. Deep Blue is a machine, but its training consists of programming and chess lessons from impassioned human beings. Kasparov is the standard-bearer for humankind, but he's sparring against a computer running a sophisticated program. The preparations on both sides mirror the relationship that all of us have developed with the silicon interlopers into domains we once controlled. We're not competing but collaborating. If computers were yanked from our presence, planes would be grounded, cars would stall, phones would go dead, cash registers would fall silent, printing presses would stop and the bull market would be hamburger. Silicon is our ultimate prosthesis; the industrialized world is a cyborg culture, and much of humanity's intelligent work is performed, however uneasily, with our digital companions. Computers and people are in this together. At least for now.
That's why we approach this week's historic matchup with more than a little trepidation. The terms of the partnership are subject to change. We humans may be cagier, but they are getting smarter by quantum leaps and bounds. And there's one word that we dread hearing from our silicon counterparts: checkmate.
Ask Me One Thing
Gabor asks, “What share of news and newsletters will be written by LLMs a year from now and how will that change the news industry?”
Thanks, Gabor. Glad I grabbed this question from you before T2/Pebble sadly closed.
Now for your answer: I don’t know whether AI will dramatically change the journalistic landscape in a year. I have no doubt that percentage-wise, plenty of AI-generated articles will appear, but most of what people actually read will still be produced by humans. That’s because right now, the output from ChatGPT and others doesn’t approach what even an average journalist can produce. When publications try to get away with using AI content, it’s usually subpar. And using AI content is going to royally tick off the real journalists who work for a publication.
In the mid-term, maybe three to five years out, I think that a combination of better LLMs and smarter approaches to using them will probably change that equation somewhat, particularly for those clickbaity articles with tempting headlines. I wouldn’t be surprised if generative AI actually becomes more effective at luring people to click on those bottom-feeder stories than even the most cynical copy-churners and headline writers. But that won’t affect higher-quality outlets. Right now the winners in the news industry seem to be publications where high-quality reporting and writing win subscribers, as opposed to those that simply court traffic.
Long-term, the answer is fuzzier. I simply don’t know how good those systems will get at producing journalism. Ten or 20 years from now, Plaintext may well be written by one of those advanced AI scribes. Would people notice? God, I hope so.
You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.
End Times Chronicle
Meet the kilonova—a collision of two neutron stars. If one of these happened within 36 light-years of Earth, all life would be eradicated for thousands of years. At least we wouldn’t have to worry about AI.
Last but Not Least
Here’s another overview of Biden’s executive order.
WIRED’s reporting from London on the UK’s AI Safety Summit
The director of Deadpool and Night at the Museum weighs in on streaming and Taylor Swift.
It’s better in the Bahamas—now that Sam Bankman-Fried is gone.