Hinton quit his position at Google in May 2023, in part to speak freely about AI dangers. He noted that Google had “acted very responsibly,” but thought it important to be able to discuss the risks of AI without considering how his statements might affect a Big Tech employer. In his keynote at AI4, Hinton outlined several dire scenarios that could unfold as AI systems become more sophisticated and autonomous.
Hinton on AI’s superhuman learning capabilities
Hinton explained that the digital nature of AI systems presents a formidable advantage when it comes to learning. “If you happen to have many different copies of exactly the same neural network… You can get one copy and show it one bit of data, get another copy and show it another bit of data, and you can average the gradients so both copies are benefiting from what each copy learned,” he said. “And if you have thousands of copies, then you can get thousands of times more knowledge into the system than if you just had one copy. That’s what these AI systems can do, and that’s why they’re going to be so much better than us.”
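What Hinton is describing is, in essence, data-parallel training: identical copies of a model each compute a gradient on their own shard of the data, and a single averaged update benefits every copy. Below is a minimal sketch of the idea, assuming a toy linear model trained with NumPy; the model, data shards, and learning rate are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)              # shared weights, identical in every copy

def gradient(w, X, y):
    """Mean-squared-error gradient for a linear model on one data shard."""
    residual = X @ w - y
    return 2 * X.T @ residual / len(y)

# Two copies of the model each see a different shard of the data.
X_a, y_a = rng.normal(size=(8, 3)), rng.normal(size=8)
X_b, y_b = rng.normal(size=(8, 3)), rng.normal(size=8)

for _ in range(100):
    g_a = gradient(w, X_a, y_a)     # copy A learns from its shard
    g_b = gradient(w, X_b, y_b)     # copy B learns from its shard
    g = (g_a + g_b) / 2             # average the gradients...
    w -= 0.05 * g                   # ...so one update reflects both shards
```

With thousands of copies instead of two, each update pools what every copy learned from its own data, which is the multiplier Hinton points to.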
Other AI researchers such as Yann LeCun take a more conservative view of AI systems’ capabilities, arguing they are still far from achieving human-level intelligence or common-sense reasoning. But Hinton points to the ability of modern AI systems to learn and share knowledge collectively across multiple instances. If people had that capacity, it “would be wonderful,” he said. “Each person in this room could go off and do a course in a different subject, we could average the gradients, and then we’d all know what everybody learned.”
Potential risks: ‘Massive job losses’ and deliberate pandemics
As AI systems rapidly evolve, Geoffrey Hinton sees several immediate threats spanning economic disruption, political manipulation, and cybersecurity vulnerabilities. Not all experts agree that AI will cause substantial unemployment, but Hinton believes it will. “There’s going to be massive job losses, just as the Industrial Revolution made machines stronger than us,” he said. And the job losses could “lead to a lot of political problems.”
Though enterprise AI remains at an early stage, its impact on employment may already be felt: AI tools are augmenting the work of some professionals while reducing demand for some entry-level positions, such as paralegals and copywriters. Earlier this year, the BBC ran a story illustrating this trend in the copywriting industry. The article detailed the experience of Benjamin Miller (a pseudonym), who led a team of over 60 writers and editors at a tech company. The company first introduced an AI system to generate article outlines; within a few months, managers enlisted ChatGPT to draft entire articles. Most of Miller’s team was laid off, leaving Miller as the sole editor, focused on “cleaning things up and making the writing sound less awkward, cutting out weirdly formal or over-enthusiastic language.” In the end, Miller, too, was let go.
On the risks of misinformation and election interference
In his talk, Hinton also voiced concerns about the threat of AI-generated misinformation, especially its potential to corrupt elections through deepfakes (artificial images and videos). To combat this, Hinton proposes an “inoculation” strategy: deliberately creating fake videos that conclude by explicitly stating their artificial nature, such as “This was fake. Trump never said that. It wasn’t even Trump.”
Hinton also briefly mentioned the development of lethal autonomous weapons and “deliberate pandemics, which are very scary.” He has voiced concerns on that subject before, warning that such systems could “select and engage targets without human intervention.” He has also warned that if any major military power pursues AI weapons development, it could trigger a global arms race.
Cybersecurity threats
Finally, Hinton raises significant concerns about the cybersecurity risks associated with advanced AI systems, particularly regarding the practice of open-sourcing AI models. He strongly argues against publicly releasing AI model weights.
Meta, for instance, has made open source AI a cornerstone of its AI strategy; to date, its models have been downloaded hundreds of millions of times. Hinton does not approve of open-sourcing large language models, which can cost upwards of $100 million to train. “I just think that’s crazy… Cyber criminals grab those weights and fine-tune it,” Hinton said. Most cybercrime syndicates could never afford that kind of spending on R&D, but with open-sourced large language models they could “commit terrible cyber attacks.”
Hinton points out a noticeable gap in current regulations. “If you look at all the regulations and the European regulations, there’s a little clause… that says none of these regulations apply to military uses of AI.” Previously, Hinton has called for worldwide bans on AI-powered military robots.
Despite the array of potential threats, Hinton’s deeper concerns lie in the long-term implications of AI development itself. Once AI systems are “better than us,” he said, there may be little use in trying to wrest control from them. “And we’ll be sunk.” “I think we should think very hard about how we’re going to design, control, and coexist with these AI systems,” he said.
But Hinton also sounded a note of optimism. “Do not forget that AI will be immensely helpful in areas of healthcare, which is why its development cannot be stopped,” he said.