Containing Frankenstein's Monsters
Mustafa Suleyman's 'The Coming Wave: Technology, Power and the Twenty-First Century's Greatest Dilemma'
Coming to my philosophy degree from a science background, I was unafraid of symbolic logic, a mandatory component in our program. My ease with it almost led me to a PhD in developing logics for neural networks, an exciting interdisciplinary area of research back in the early 90s. A supervisor was arranged, and a plan was formulated, but then I took six months out in India and Nepal, changed direction, learned some Sanskrit and ended up with a PhD on the 4th Century Indian philosopher Vasubandhu instead.
I mention this because it places me in the slightly awkward position of now being immensely concerned about where we have ended up with AI, to the point of sometimes wishing we had seen the future in Frank Herbert's tales of the Butlerian Jihad in Dune and placed a moratorium on such research from the beginning. I might have been a part of bringing that future about (even if LLMs are currently in the ascendancy over logical reasoning and neural nets), and that would now be playing heavily on my conscience.
Mustafa Suleyman (with Michael Bhaskar) may be wrestling with similar dilemmas in his book The Coming Wave: Technology, Power and the Twenty-First Century's Greatest Dilemma. Suleyman is a key player in the AI field, having been one of the founders (with Demis Hassabis and Shane Legg) of the neural net-based, game-changing (pun intended) DeepMind, which hit the news in 2016 after beating Go champion Lee Sedol, and world champion Ke Jie in 2017. DeepMind was sold to Google in 2014, and Suleyman became head of Applied AI; he left in 2019 to join the parent company to work on policy, after being placed on leave following questions about improper data-sharing between the NHS and DeepMind. In 2022 Suleyman co-founded Inflection AI.
'The coming wave' denotes the profound changes that are about to occur in our societies due to the exponential advance of two technologies, AI and synthetic biology. These are changes that Suleyman, whilst claiming not to be a determinist, sees not merely as unstoppable but as inevitable in the context of the history of technological progress.
If you have been following the debate around AI, you will be familiar with the 'alignment problem' and Kurzweil's 'singularity', the point at which AI becomes superintelligent (AGI) and recursively self-improves, far surpassing our capabilities and very probably becoming impossible to control. Other books by expert commentators and popular Twitter accounts have been primarily concerned with this issue.
Suleyman is a little sceptical about the timeline of this event (popularly ~2045) and is far more concerned about the next decade and the emergence of an intermediate stage, ACI, 'artificial capable intelligence'. He sees this happening with the next generation of cutting-edge AIs, built on platforms (such as his own at Inflection AI) with a thousand times the compute of the system GPT-4 was trained on, leading to systems that can carry out long sequences of highly complex general tasks.
As much as the cutting edge provides incredible opportunities and presents major threats, Suleyman is also worried about the long tail: the open-source AIs that anybody can experiment with. Hardware resources for hobbyists and entrepreneurs lacking venture capital injections are far more limited, but hardware improves rapidly, as does the efficiency of algorithms. Just as serious as AI is the rise of synthetic biology, largely stemming from the invention of CRISPR gene editing, which gives anybody with a first degree-level understanding of biological science and $25k to spare the ability to experiment on the desktop, currently unregulated.
For Suleyman, we don't have an immediate alignment problem; the problem lies far more with the human operators of the AIs and gene-editing tech. We have what he characterises instead as a 'containment problem'. This is far more than simply a technical problem; it is a legal, political, societal and cultural problem. Simple regulatory frameworks won't cut it.
Having characterised the problem, Suleyman devotes the first part of the book to a whistlestop history, arguing that proliferation has always happened with any useful technology, but that this has led to many unintended and sometimes unfortunate consequences. Further, the few limited attempts to contain technologies (e.g. printing press bans, the Luddites) have consistently failed, presenting us with a thorny problem today.
Suleyman follows this in the book's second part with a tour of the current capabilities of AI and synthetic biology and how they are likely to develop in the short and medium term, with ACI, new drugs, organ generation, and enhanced longevity. Combined, these technologies will likely reshape medicine, materials, energy, manufacturing, agriculture, computing, robotics, human capabilities and lifespan. They are asymmetric (single systems can have disproportionate effects), hyper-evolve, are omni-use, and can become autonomous (with attendant eventual AGI risks). The incentives, both commercial and military, are now so high that further exponential AI development is extremely difficult to contain.
The overall drift of the first two parts is to establish his argument that the exponential development and proliferation of these technologies are now inevitable and, in any case, desirable for the benefits they bring. I am prepared to accept that AI is now probably unstoppable; I am not so sure about synthetic biology.
One of the chief benefits, returned to throughout the book, is addressing the pressing problem of climate change. This is not the place to debate the precise quantification of the seriousness of climate change; suffice it to say that, for this reader at least, the argument that we cannot solve the climate problem without advanced AI, whatever the cost, and that we must therefore continue to develop AI, is weak. It assumes that AI will solve the problem, an assumption to which we can only assign a probability.
With regard to synthetic biology, whilst the potential for new materials is valuable, public enthusiasm for genetically modified foods is marginal at best. There are continuing regulatory battles over the desires of corporations and governments to change the labelling of genetically modified foods because the public needs to be essentially hoodwinked into buying and consuming them. That's before we even get to synthetic meat and insect protein. The public is now becoming aware of how detrimental to their health processed foods are; synthetic foods are processed by definition.
We have just seen the first roll-out of experimental mRNA 'vaccines'. How well has that gone? The list of harms, the sudden deaths, the excess deaths, the numbers of people on disability benefits, and the drop in birth rates grows daily in highly vaccinated populations. We have recently learnt that vaccine samples contain unsafe levels of DNA fragments and an SV40 promoter, a matter of great concern to several oncologists. Public enthusiasm for injecting any more of these toxic products has dropped to near zero. If governments want to get them into arms in the future, they may have to mandate them.
Longevity gains would benefit few alive today, and then only the obscenely wealthy, and I don't see a clear moral longtermist argument either. Desirable, maybe; necessary, no. As for transhumanist technologies, this is, even for this atheist, 'playing god'. There is near-universal condemnation of, and revulsion towards, such experimentation.
Suleyman might well argue that the general public's views are simply down to ignorance, which brings us to the book's third part. He begins here with the argument that well-ordered technocratic liberal democracies are the ideal form of the state, but that these are now fragile due to misinformation, distrust, inequality, and populism. His characterisation of populism is trite; he unreflectively views it as authoritarian demagoguery, contrasting it with democracy. Populism is participative democracy, where the plebs get a significant say in policy; it is opposed to the authoritarian diktats of elites and technocrats. Of course, that is not to say that populism is not a significant threat to the cosy world of AI researchers and entrepreneurs, nor does it settle the question of the optimal way to run a society such that all its members benefit.
Suleyman discusses how AI will amplify the threats: hacking, cyber-attacks, autonomous weaponry, deep fakes, electoral tampering, domestic terrorism, and the home production of chemical and biological weapons. It will automate jobs, creating mass unemployment and resentment. Corporations will become even more powerful, nation-states more authoritarian, or we could even see micro-states and enclaves forming.
This all leads Suleyman to the two outcomes he thinks are now inevitable (both can happen in different places) unless immediate drastic changes are made. The first is that everything spirals out of control, leading to anarchy, internal wars and fragmentation. The second is that, recognising the massive domestic terrorist and disruptive threat that AI and synthetic biology represent, states become totalitarian, leveraging technology to monitor every second of everybody’s life to stop threats from emerging (pre-crime). Almost all the technology required for this is now in place, and the beginnings of the total surveillance security state can be seen in China. We have either technology-enabled catastrophes or techno-dystopias.
Suleyman singles out Peter Thiel in the book as exemplifying the dangers of fragmentation, perhaps justified in that Thiel has shown interest in creating a libertarian enclave. What is missing in the book is any acknowledgement that Thiel was a key early funder of DeepMind, an enabler of the success that Suleyman has enjoyed, owning more than a quarter of the shares in the company prior to the sale to Google.
The book's final section argues that we must find a solution with urgency because both alternatives are utterly unpalatable. Suleyman proposes such a solution in ten steps, summarised neatly in this table (the tenth step is implementing all of these together):
If you are looking at this and thinking there is not a cat's chance in hell of all that coming together internationally, then you will be sharing my views, and I suspect Suleyman's, too. In interviews, he does not come over as confident of success. Nevertheless, he argues passionately at the end of the book that we must do these things; we must contain AI and synthetic biology, for otherwise, humanity is stuffed. This happens before we get anywhere near AGI because it happens through human use and misuse of these technologies.
Suleyman argues throughout the book against "pessimism aversion". He believes tech leaders and governments are not taking these threats seriously enough. This reader needs no convincing; I have been concerned about all this for the past couple of years, attempting to convince others in debate that we don't need AGI for great harm to occur. Having read a great deal on these topics, I didn't learn much from the book that I didn't already know. However, anybody not so familiar with where we are and where we are heading will learn a great deal by reading it. They are unlikely to be pessimism-averse by the end of it.
I agree with Suleyman that this is one case in which open-source software will prove dangerous, but one has to contrast this with questions about who owns the closed source. Big Tech corporations run a close second to Big Pharma as the least trustworthy people on the planet. Who, in their right mind, would trust Microsoft, Alphabet, Meta or Baidu with anything? Apple and X are barely any more trustworthy.
At the moment, questions about 'safety' revolve around whether the AIs will hallucinate, say mean things, be racist or say anything that doesn't endorse woke ideology. Trusting an AI that, for example, outputs gender ideology without pointing out that it lacks any scientific basis does not bode well for objectivity in the future. It instead suggests that AI will be used as a tool for indoctrination. Suleyman, to his credit, relates a story concerning his attempts to form an AI Ethics Advisory Council at Google with a broad range of views representative of the general public. It dissolved in chaos within a week due to the invitation of a member with gender-critical views. There is a shortage of adults in the room.
These considerations are relatively trivial when contrasted with the far greater threats AI presents. Suleyman relates a story about an AI tasked with finding poisons: within six hours it had identified 40,000 molecules as toxic as the most deadly chemical weapons. Considerations like this make it hard to argue that open-source models are a boon to humanity. That synthetic biology became available to Joe Public in his garage in the first place is an egregious oversight and needs to be curtailed without delay. Where are 'health and safety' when you actually need them?
Dealing with such issues is far easier than trying to overcome the rapacious greed of corporations or the competition between nations in a fractured world. Possibly the only thing that will encourage cooperation and suppress clandestine research is a significant shock in which millions die. It is only a matter of time before such an event occurs.
Suleyman acknowledges his role in bringing about the situation we now find ourselves in. Perhaps most noticeable in this context is the DeepMind Go victory, which generated a minor amount of interest here in the West but was watched by over 280 million in Asia. It was this event that precipitated intense Chinese interest in AI, which has since assisted China in implementing a fascist surveillance state, a society that Suleyman sees as very possible in our future. He singles out China in his ten steps to containment under 'Choke Points'. The Biden government has already implemented this by beginning the Chip Wars last year, starving China of GPUs and placing it six months behind the US in the new arms race. This has dramatically increased tensions between the two countries.
Suleyman never really addresses the question of how to solve the distrust that the general public now has in the legacy media, science and government. If he offers anything, it is the suggestion of citizen assemblies, but the reality is that his solutions require those citizen assemblies to reach the 'right' conclusions; that is, the public will have to accept further losses of freedoms and privacy or be forced into compliance because the stakes are so high. The reality, which the pandemic brought home to many of us, is that we are already a long way down the road to the digital panopticon here in the West. Only biometric digital IDs and CBDCs are now needed to potentially enslave us forever in a social credit system, and they will be here in two or three years. This is partly why there is so much distrust now. Citizen assemblies will not solve this. This prong, then, of Suleyman's fork is the most likely, in Western Europe, at least.
With all these threats and implications, he still shows signs of cakeism. He wants a highly unlikely united solution to contain the risks whilst still wishing to exploit these technologies for humanity's benefit and also for his own, having formed his new company (which he assures us is ethical). To be fair, he has worked behind the scenes on safety and policy at Google, as noted above, but does this excuse him for his own Oppenheimer-adjacent role in all this? By his admission, he only addresses the problems that will unfold in the next decade or so. If or when AGI arrives, all bets are off. Although he is undoubtedly not evangelising for AGI, others, such as Sam Altman, are. Suleyman's pioneering work will forever remain an important staging post along the way and implicated in the consequences, for better or worse.
This book has a subtext, but I cannot quite decide what it is. Beyond trying to warn us of what is coming, is it a veiled apology for the role he has played? A working out in print of his demons? Is he trying to convince himself as much as us that if he hadn't done it, others would have, assuaging his guilt? It's a good and important book; I recommend you read it and make your own mind up.