It’s been a year since ChatGPT was released and immediately caught on. It is the first system based around artificial intelligence that I can remember really firing the public’s imagination – and the subject of AI has remained in the headlines ever since.
This recent book is by one of the founders of DeepMind, one of the highest-profile AI companies (best known for systems that can beat world-champion players at the strategy game Go and help advance research into protein-related disease prevention).
The main point of the book is to highlight the risks posed by the simultaneous arrival of two revolutionary technologies: artificial intelligence / robotics and synthetic biology.
These technologies will be crucial in tackling the big challenges of the 21st century – especially the climate crisis, feeding a world of 8 billion people and supporting an ageing population. We will need ever-improving technology just to maintain our current economic levels. The trouble is that they are also general-purpose technologies. This means they could be put to use doing a wide range of jobs (ultimately almost all?) and could outperform humans at those tasks.
The most vivid way that the subject has been discussed is AI’s potential to become an artificial general intelligence – a system more intelligent than humans that could ultimately lead to the destruction of the human race itself.
The trouble is that this seems a long way off and slightly preposterous (think Terminator killer robots), so most people do not see the need to act. Yet the threat is much more imminent than most people think (within the space of a few years at worst), because to pose a real threat to our economic welfare it would only take what the author calls ACI, or “artificial capable intelligence”.
Sometimes when authors delve into speculation you feel they are being fanciful and indulging their own imagination. These ideas, though, do not seem too far-fetched, and they are delivered in a measured way without hysterics, so hopefully they will be listened to and deeply considered by many people.
The biggest impacts will be on society. Beyond the risk to employment, falling costs mean that the products these technologies enable will be available to everyone, regardless of their ideals or motivations. People will use them for good and for bad. Every threat we currently see online will be multiplied; these technologies will be used by individuals, terrorists or political organisations looking to promote their cause. As we know, it only takes a single action to provoke repercussions around the world for decades.
Then there are the more mundane issues that arise when machines manage more aspects of everyday life – unintended consequences of errors, unexpected edge cases, and exploitation of flaws in logic or programming.
The challenge is that the only institutions able to act on the scale required are nation-state governments – using regulation to contain the technologies. This is an area where the book is particularly strong. Are governments up to the task of managing this level of technological change? There is already mistrust of politicians in many countries around the world. Despite this, regulation is starting to happen – both the US and the EU have initial legally binding legislation either in place or underway. The UK convened an AI Safety Summit at which many countries agreed a non-legally-binding declaration. See the links below for details.
The book then takes on a more serious tone as it turns to some of the concrete steps that can be taken: proposals, for example, to limit the power of an AI model by restricting the hardware it runs on, to require in-depth audits so that we know what is going on, and to analyse what is being produced by synthetic biology. It is refreshing to see a book that puts forward solutions rather than just laying out the challenges.
Despite everything, there is still a whiff of hypocrisy here that I cannot shake. It is good that we have an AI industry insider with influence in the right circles who has evidently thought through the situation and seems to have the best of intentions. No doubt there are many others in the field who lack the same principles or are willing to play more fast and loose with the technology.
However, here is someone who started one of the most successful AI companies, sold it to Google and no doubt pocketed a lot of money in the process – enough to ride out any wave that comes along. Having failed to get Google to commit legally to the safeguards he was trying to push on them, he sold to them anyway. One course of action could have been to refuse to sell to Google if they would not follow through on his ethical concerns. His next move was to start up another AI company. Then he writes a book about how bad his work could end up being for the rest of society. For me, this recalls the attitudes around the 2008 financial crisis: “private profit, public debt”. A few people make all the money and societies have to bail the system out when it all goes too far. Except this time there is no bailout big enough, and by the time governments realise one is necessary it will be too late.
Anyway, despite these reservations on principle, his arguments are clear, easy to digest and persuasive. It is the best book around for explaining the threats posed by, and the ethics surrounding, the coming convergence of super-powerful technologies that will have huge impacts on the way we work and live. It is a successful mix: a “popular science” book likely to appeal to a broad audience, an outline of the issues posed, and some practical actions that could be taken.
This is an essential read for anyone planning on living and working during the next couple of decades.
See also:
Regulation:
• Practical AI podcast: Government regulation of AI has arrived
• US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
• FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
• EU AI Act: first regulation on artificial intelligence
• The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
• The Future of Life Institute: “Steering transformative technology towards benefitting life and away from extreme large-scale risks.”
• The Future of Life Institute: The Artificial Intelligence Act (overview)
• The Future of Life Institute: The Artificial Intelligence Act (itself)
Books:
• Spare Cycles: Mini review: “The Rise of the Robots” by Martin Ford (audiobook version)
• Spare Cycles: Mini review: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future
• Spare Cycles: Mini Review: Race Against The Machine
• Spare Cycles: Mini review: “The Second Machine Age” by Erik Brynjolfsson and Andrew McAfee (audiobook version)
• Spare Cycles: Mini review: “Sea of Rust” by C. Robert Cargill (audiobook version)
News:
• Wired: What OpenAI Really Wants
• YouTube: OpenAI DevDay, Opening Keynote
• Wired: What the hell just happened at OpenAI?
• The Guardian: OpenAI ‘was working on advanced model so powerful it alarmed staff’
PS:
Opinion of the WordPress AI Assistant before publication of this review 😀.