AI Can Be an Extraordinary Force for Good—if It’s Contained

In a quaint Regency-era office overlooking London’s Russell Square, I cofounded a company called DeepMind with two friends, Demis Hassabis and Shane Legg, in the summer of 2010. Our goal, one that still feels as ambitious and crazy and hopeful as it did back then, was to replicate the very thing that makes us unique as a species: our intelligence.

To achieve this, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity. Since such a system would benefit from the massively parallel processing of supercomputers and the explosion of vast new sources of data from across the open web, we knew that even modest progress toward this goal would have profound societal implications.

It certainly felt pretty far-out at the time.

But AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound.

Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks. Now, alongside a host of technologies including synthetic biology, robotics, and quantum computing, a wave of fast-developing and extremely capable AI is starting to break. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

As a builder of these technologies, I believe they can deliver an extraordinary amount of good. But without what I call containment, every other discussion of a technology, of its ethical shortcomings or of the benefits it could bring, is inconsequential. I see containment as an interlocking set of technical, social, and legal mechanisms constraining and controlling technology, working at every possible level: a means, in theory, of evading the dilemma of how we keep control of the most powerful technologies in history. We urgently need watertight answers for how the coming wave can be controlled and contained, and for how the safeguards and affordances of the democratic nation-state, critical to managing these technologies and yet threatened by them, can be maintained. Right now no one has such a plan. That points to a future none of us wants, but it’s one I fear is increasingly likely.

Facing immense ingrained incentives driving technology forward, containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible.

It would seem that the key to containment is deft regulation on national and supranational levels, balancing the need to make progress alongside sensible safety constraints, spanning everything from tech giants and militaries to small university research groups and startups, tied up in a comprehensive, enforceable framework. We’ve done it before, so the argument goes; look at cars, planes, and medicines. Isn’t this how we manage and contain the coming wave?

If only it were that simple. Regulation is essential. But regulation alone is not enough. Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. The truth, though, is that novel threats are exceptionally difficult for any government to navigate. That’s not a flaw in the idea of government; it’s an assessment of the scale of the challenge before us. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.


Despite the headwinds, efforts to regulate frontier technologies are growing. The most ambitious legislation is probably the EU’s AI Act, first proposed in 2021. As of this writing, the act is going through the lengthy process of becoming European law. It categorizes AI research and deployment on a risk-based scale. Technologies with “unacceptable risk” of causing direct harm will be prohibited. Where AI affects fundamental human rights or critical systems like basic infrastructure, public transport, health, or welfare, it will get classed as “high risk,” subjected to greater levels of oversight and accountability. High-risk AI must be “transparent, secure, subject to human control and properly documented.”

The AI Act, although one of the world’s most advanced, ambitious, and farsighted regulatory attempts to date, also demonstrates the inherent problems with regulation. It has been attacked from all sides, for going too far and not going far enough. Some argue it’s too focused on nascent, future-facing risks, trying to regulate something that doesn’t even exist; others that it’s not farsighted enough. Some believe it lets big tech companies off the hook, that they were instrumental in its drafting and watered down its provisions. Others think it overreaches and will chill research and innovation in the EU, hurting jobs and tax revenues.

Most regulation walks a tightrope of competing interests. But in few areas other than frontier technology must it tackle something so widely diffused, so critical to the economy, and yet so fast evolving. All the noise and confusion make clear how hard and complex any form of regulation is, especially amid accelerating change, and how, because of that, it will almost certainly leave gaps, falling short of effective containment.

Above the cut and thrust of legislative debate, nations are also caught in a contradiction. On the one hand, every nation wants to be, and be seen to be, at the technological frontier. It’s a matter of national pride, of national security, and an existential imperative. On the other hand, they’re desperate to regulate and manage these technologies, to contain them, not least for fear they will threaten the nation-state as the ultimate seat of power. The scary thing is that this assumes a best-case scenario of strong, reasonably competent, cohesive (liberal democratic) nation-states capable of working coherently as units internally and coordinating well internationally.

China is a regulatory leader of sorts. The government has issued multiple edicts on AI ethics, seeking to impose wide-ranging restrictions. It proactively banned various cryptocurrencies and DeFi initiatives, and it limits the time children under 18 can spend on games and social apps to 90 minutes a day during the week, three hours on the weekend. Draft regulation of recommendation algorithms and large language models in China far exceeds anything we’ve yet seen in the West.

But while China is slamming on the brakes in some areas, it is also charging ahead in others. Its regulation is matched by an unparalleled deployment of technology as a tool of authoritarian government power. Speak to Western defense and policy insiders and they’re adamant that, although China talks a good game on AI ethics and limitations, when it comes to national security, there are no meaningful barriers. In effect, Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.


Regulation on its own likely won’t be enough to contain technology. This is not an argument against regulation, which is absolutely needed, but a necessary recognition of its limits. It doesn’t stop motivated bad actors or accidents. It doesn’t cut to the heart of an open and unpredictable research system. It doesn’t provide alternatives given the immense financial rewards on offer. And above all, it doesn’t mitigate strategic necessity. It doesn’t describe how countries might coordinate on an enticing, hard-to-define transnational phenomenon, building a delicate critical mass of alliances, especially in a context where international treaties all too often fail. There is an unbridgeable gulf between the desire to rein in the coming wave and the desire to shape and own it, between the need for protections against technologies and the need for protections against others. Advantage and control point in opposing directions.

The reality is that containment is not something that a government, or even a group of governments, can do alone. It requires innovation and boldness in partnering between the public and private sectors and a completely new set of incentives for all parties. Regulations like the EU’s AI Act do at least hint at a world where containment is on the map, one where leading governments take the risks of proliferation seriously, demonstrating new levels of commitment and willingness to make serious sacrifices.

Regulation is not enough, but at least it’s a start. In a world where containment seems like it’s not possible, all of this gestures toward a future where it might be.

So, where do we go from here? How do we go beyond regulation? Does any entity have the power to prevent mass proliferation while capturing the immense power and benefits arising from the coming wave? To stop bad actors from acquiring a technology, or to shape the spread of nascent ideas around it? As autonomy increases, can anyone or anything really hope to have meaningful control at the macro level? Containment means answering yes to questions like these. In theory, it gets us away from technology’s most concerning impacts. It means at once harnessing and controlling the wave, a vital tool for building sustainable and flourishing societies, while checking it in ways that avoid the risk of serious catastrophe, but not so invasively as to invite a dystopia. It means writing a new kind of grand bargain.

It’s tempting to think of containment in an obvious, literal sense, a kind of magic box in which a given technology can be sealed away. At the outer limit—in the case of rogue malware or pathogens—such drastic steps might be needed. Generally, though, consider containment more as a set of guardrails, a way to keep humanity in the driver’s seat when a technology risks causing more harm than good. Picture those guardrails operating at different levels and with different modes of implementation. They need to be strong enough that, in theory, they could stop a runaway catastrophe. Containment will need to respond to the nature of a technology and channel it in directions that are easier to control. It’s worth asking the following kinds of questions to surface promising avenues:


Is the technology omni-use and general-purpose or specific? A nuclear weapon is a highly specific technology with one purpose, whereas a computer is inherently multi-use. The more potential use cases, the more difficult a technology is to contain. Rather than general systems, then, those that are more narrowly scoped and domain-specific should be encouraged.

Is the tech moving away from atoms toward bits? The more dematerialized a technology, the more it is subject to hard-to-control hyper-evolutionary effects. Areas like materials design or drug development are going to rapidly accelerate, making the pace of progress harder to track.

Are price and complexity coming down, and if so, how fast? The price of fighter jets has not come down the way the price of transistors or consumer hardware has. A threat originating in basic computing is therefore far more widely diffused than one from fighter jets, despite the latter’s obvious destructive potential.

Are there viable alternatives ready to go? CFCs could be banned partly because cheaper and safer alternatives for refrigeration already existed. The more safe alternatives are available, the easier a technology is to phase out.

Does the technology enable asymmetric impact? Think of a drone swarm deployed against a conventional military, or a tiny computer or biological virus damaging vital social systems. Certain technologies are much more likely to exploit vulnerabilities.

Does it have autonomous characteristics? Is there scope for self-learning or operation without oversight? The more a technology by design requires human intervention, the less chance there is of losing control.

Does it confer outsize geopolitical strategic advantage? Chemical weapons, for example, have limited advantages and lots of downsides, whereas getting ahead in AI or bio has enormous upsides, both economic and military. Saying no is consequently harder.

Does it favor offense or defense? During World War II, the development of missiles like the V-2 helped offensive operations, while a technology like radar bolstered defense. Orienting development toward defense over offense tends toward containment.

Are there resource or engineering constraints on its invention, development, and deployment? Silicon chips require specialized and highly concentrated materials, machines, and knowledge. The talent available for a synthetic biology startup is, in global terms, still quite small. Both help containment in the near term.

Ask questions like these, and a holistic vision of containment begins to emerge. Little is going to be more important than fleshing out that vision—and making it a reality.


Excerpted from The Coming Wave by Mustafa Suleyman. Copyright © 2023 by Mustafa Suleyman. Excerpted by permission of Crown. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
